As neural networks continue to grow in popularity across a wide range of applications, the need for large, diverse training datasets has become increasingly critical. One dataset that has gained prominence in recent years is Common Crawl, a freely available web corpus that, at the time of writing, is among the largest publicly available collections of web text. In this article, we’ll explore how Common Crawl can be used to train neural networks and what benefits it offers over other datasets.

What is Common Crawl?

Common Crawl is a nonprofit organization that aims to provide open access to web data. It accomplishes this by regularly crawling the web and archiving the pages it visits, including the extracted text. The resulting dataset, known as the Common Crawl corpus, is available for researchers and developers to use in their projects.

The corpus contains an enormous amount of text data in many languages, collected from billions of web pages, which makes it a valuable resource for training language models. Compressed, each crawl occupies tens of terabytes of storage. To make the most of the Common Crawl dataset, developers must understand how to efficiently pull, read, and process the data.

Why Use Common Crawl for Neural Network Training?

There are several reasons why Common Crawl is a valuable resource for training neural networks. First, it provides a large and diverse dataset for training language models. Because neural networks require vast amounts of data to learn effectively, having access to a corpus as large as Common Crawl is essential.

Second, Common Crawl provides a diverse sample of natural language. Neural networks trained on Common Crawl data will be exposed to a wide range of topics and domains, making them better suited to handle a broader range of language tasks. Additionally, this diversity makes neural networks trained on Common Crawl data less likely to suffer from overfitting, a common problem when training on smaller or more narrowly focused datasets.

Finally, Common Crawl is freely available and accessible to researchers and developers worldwide. This means that the dataset can be used by anyone for any purpose, including commercial applications. The open access to the dataset encourages collaboration and innovation in the field of natural language processing and machine learning.

Web Archive Types: WET Archives

WET files are the web archive format Common Crawl uses to store the plain text extracted from web pages. The script shown in the next section uses the requests library to download the list of WET file paths for the January/February 2023 crawl (CC-MAIN-2023-06) from Common Crawl’s servers. It then loops over each file path, downloads the corresponding WET file, decompresses it, and reads its records using the warcio library. Finally, it decodes and prints the contents of the first three records (this limit can be changed) in the first WET file. This code can be useful for researchers and developers who want to extract text data from Common Crawl for use in natural language processing or machine learning applications.
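
For orientation, each WET record consists of a short block of WARC headers followed by the plain text extracted from one page. The example below is purely illustrative; the URI, date, and length values are invented.

WARC/1.0
WARC-Type: conversion
WARC-Target-URI: http://example.com/some-page
WARC-Date: 2023-01-26T21:08:44Z
WARC-Record-ID: <urn:uuid:00000000-0000-0000-0000-000000000000>
Content-Type: text/plain
Content-Length: 1234

The plain text extracted from the page body appears here.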

How to Access Common Crawl Archives

First, the Common Crawl corpus must be downloaded and preprocessed. The full dataset is vast: even compressed, it requires hundreds of terabytes of storage. Because of this, Common Crawl splits the dataset into separate crawls (by time frame) and segments that can be downloaded and processed independently. We start by specifying which crawl we would like to process with the url variable.
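
If you are not sure which crawl identifier to use, Common Crawl publishes a machine-readable list of its crawls on its index server. The short sketch below, which is not part of the main walkthrough, fetches that list from the collinfo.json endpoint and prints the available crawl IDs and names.

import requests

# Fetch the list of available crawls from the Common Crawl index server
crawls = requests.get('https://index.commoncrawl.org/collinfo.json').json()

# Each entry has an 'id' such as 'CC-MAIN-2023-06' and a human-readable 'name'
for crawl in crawls:
    print(crawl['id'], '-', crawl['name'])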

The code below imports the necessary libraries to download and process the web crawl data. It then sets the URL of the WET path list for the January/February 2023 crawl (CC-MAIN-2023-06), a compressed file that lists the locations of the crawl’s WET files.
Next, it downloads the path list using the requests library and decompresses it using gzip. The decompressed file contains the paths of the individual WET files.
After splitting the file content into individual file paths, the code loops over the paths, downloads the corresponding WET file, and prints its contents. The warcio library is used to create a WARC iterator, which steps through the records in the WET file. The contents of the first three records are printed, and the loop stops after the first WET file to keep the demonstration brief.

import requests
import gzip
from io import BytesIO
from warcio.archiveiterator import ArchiveIterator

# Set the URL of the WET path list for the January/February 2023 crawl (CC-MAIN-2023-06)
url = 'https://data.commoncrawl.org/crawl-data/CC-MAIN-2023-06/wet.paths.gz'

# Download the list of WET file paths
response = requests.get(url)
compressed_file = response.content

# Decompress the file
file_content = gzip.decompress(compressed_file)

# Split the file content into individual file paths
file_paths = file_content.decode().split()
# Loop over each file path, download the corresponding WET file, and print its contents
for path in file_paths:
    # Show which WET file is being processed
    print(path)

    # Construct the full URL of the WET file
    wet_url = 'https://data.commoncrawl.org/' + path
    
    # Download the WET file
    response = requests.get(wet_url)
    compressed_file = response.content
    
    # Decompress the file
    file_content = gzip.decompress(compressed_file)
    
    # Create a WARC iterator over the records in the WET file
    records = ArchiveIterator(BytesIO(file_content))
    
    # Iterate over the records and print the contents of the first three
    for index, record in enumerate(records):
        print(record.content_stream().read().decode('utf-8', 'ignore'))
        if index >= 2:
            # To keep the demo output brief, break out of the inner loop early
            break

    # Stop after the first WET file; remove this break to process the full crawl
    break

Output

crawl-data/CC-MAIN-2023-06/segments/1674764494826.88/wet/CC-MAIN-20230126210844-20230127000844-00000.warc.wet.gz
Software-Info: ia-web-commons.1.1.10-SNAPSHOT-20230123022639
Extracted-Date: Thu, 09 Feb 2023 17:39:18 GMT
robots: checked via crawler-commons 1.4-SNAPSHOT (https://github.com/crawler-commons/crawler-commons)
isPartOf: CC-MAIN-2023-06
operator: Common Crawl Admin (info@commoncrawl.org)
description: Wide crawl of the web for January/February 2023
publisher: Common Crawl


Изменение цвета кальмара для камуфляжа заснято на видео
НЕ ПРОПУСТИ
Экс-президент Польши заявил об «уникальном шансе разобраться с Россией»
Названо оружие НАТО, способное долететь до Москвы и Санкт-Петербурга
Бизнесмен Пригожин обратился к Володину с просьбой ввести уголовную ответственность за дискредитацию участников боевых действий — Блокнот Россия
Депутатам Госдумы усложнили отдых за границей
«Я кровь не останавливаю, а пускаю её врагам»: Пригожин назвал различие между собой и Распутиным
Борис Джонсон неожиданно приехал на Украину, встретился с Зеленским и посетил Бучу
Беглов формирует свой личный «общак» — Блокнот Россия
Медведев назвал конфликт с Западом и Украиной новой Отечественной войной
«У меня к вам есть вопрос»: Евгений Пригожин написал письмо в Белый дом
«Свора кастрированных псов»: Медведев предупредил о появлении нового альянса против США
НОВОСТНОЙ ЖУРНАЛ
Главная
АВТО
...
暂无信息
信息为会员发布,本站不承担任何内容的法律责任,如您发现有侵犯您权益的内容,请联系我们删除。
版权声明隐私保护用户协议免责声明© 微信导航 all rights reserved.黑ICP备18006982号-6
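
The loop above downloads and decompresses each WET file fully in memory before iterating over it. For larger-scale processing, a streaming approach avoids holding whole files in memory. The sketch below shows one way to do this, using requests in streaming mode together with warcio’s ArchiveIterator, which decompresses the gzipped stream on the fly; it is an optional variation rather than part of the main walkthrough, and it reuses the first WET file path from the output above.

import requests
from warcio.archiveiterator import ArchiveIterator

wet_url = ('https://data.commoncrawl.org/'
           'crawl-data/CC-MAIN-2023-06/segments/1674764494826.88/wet/'
           'CC-MAIN-20230126210844-20230127000844-00000.warc.wet.gz')

# Stream the response so the gzipped WET file is never held in memory all at once
response = requests.get(wet_url, stream=True)

# ArchiveIterator reads and decompresses the gzip stream record by record
for index, record in enumerate(ArchiveIterator(response.raw)):
    # 'conversion' records hold the extracted page text; the first record is crawl metadata
    if record.rec_type == 'conversion':
        print(record.rec_headers.get_header('WARC-Target-URI'))
        print(record.content_stream().read().decode('utf-8', 'ignore')[:500])
    if index >= 2:
        break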

Filtering

As you can see, the output of the Common Crawl WET archives is irregular and multilingual, and it captures page elements such as menus, headlines, and navigation text as well as article body text. This data will require filtering before it can be used for training. Filtering could be achieved, for example, with a language-identification model or another neural network designed to keep only English text above a threshold length.
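
As an illustration, the sketch below keeps only records whose text is longer than a threshold and is confidently identified as English. It uses fastText’s pretrained lid.176.bin language-identification model, a common choice for this kind of filtering; the model file, the threshold, the confidence cut-off, and the records_text variable (a list of decoded WET record payloads) are assumptions made for the example, not part of the original walkthrough.

import fasttext

# Assumes the pretrained language-identification model (lid.176.bin) has been
# downloaded from the fastText website and placed in the working directory
lang_model = fasttext.load_model('lid.176.bin')

MIN_LENGTH = 500  # minimum number of characters to keep a record (arbitrary choice)

def keep_record(text):
    # Drop short records outright
    if len(text) < MIN_LENGTH:
        return False
    # fastText expects a single line of text
    labels, probs = lang_model.predict(text.replace('\n', ' '))
    # Keep records confidently identified as English
    return labels[0] == '__label__en' and probs[0] > 0.9

# records_text is assumed to be a list of decoded WET record payloads
filtered = [text for text in records_text if keep_record(text)]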

After filtering the WET data using existing tools or custom scripts, you may want to further refine the data to meet your specific requirements. For example, you may want to exclude certain types of content, such as pages written in a particular language or containing specific keywords. Alternatively, you may want to prioritize certain types of content, such as news articles or blog posts.

To achieve this level of filtering, you can use more advanced techniques such as machine learning classifiers or natural language processing (NLP) algorithms. These tools can help you identify patterns and extract relevant information from the text, such as sentiment, topic, or author. By applying these techniques, you can create a more targeted and high-quality dataset for training your neural network.
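
For instance, a pretrained zero-shot classifier can assign each record a rough topic so that only the categories you care about are kept. The snippet below is one possible illustration using the Hugging Face transformers pipeline; the candidate labels, the truncation length, and the filtered list (from the previous sketch) are arbitrary choices, not part of the original article.

from transformers import pipeline

# A zero-shot classifier scores arbitrary candidate labels against each text
classifier = pipeline('zero-shot-classification')

candidate_labels = ['news article', 'blog post', 'product listing', 'spam']

def topic_of(text):
    # Classify a truncated snippet to keep inference fast
    result = classifier(text[:1000], candidate_labels=candidate_labels)
    return result['labels'][0]  # highest-scoring label

# Keep only records that look like news articles or blog posts
filtered_by_topic = [t for t in filtered if topic_of(t) in ('news article', 'blog post')]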

Overall, the process of filtering the WET data involves a combination of manual and automated techniques, depending on your specific goals and resources. It requires careful planning and experimentation to find the right balance between relevance, quality, and scalability. However, with the right tools and techniques, you can create a powerful training dataset for your language model and unlock its full potential.

Training

Next, the filtered data is used to train a neural network. There are several types of neural networks that can be trained on Common Crawl data, including recurrent neural networks (RNNs), convolutional neural networks (CNNs), and transformers. The choice of neural network architecture will depend on the specific task the network is being trained to perform.
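
As one concrete illustration, the sketch below fine-tunes a small GPT-2 model on the filtered text using the Hugging Face datasets and transformers libraries. The file name filtered.txt (one document per line), the hyperparameters, and the choice of GPT-2 are assumptions made for the example; any of the architectures mentioned above could be used instead.

from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

# Load the filtered Common Crawl text, assumed to be saved one document per line
dataset = load_dataset('text', data_files={'train': 'filtered.txt'})

tokenizer = AutoTokenizer.from_pretrained('gpt2')
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained('gpt2')

def tokenize(batch):
    return tokenizer(batch['text'], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=['text'])

# Causal language modelling: the collator builds labels from the inputs
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

args = TrainingArguments(
    output_dir='cc-lm',
    per_device_train_batch_size=4,
    num_train_epochs=1,
)

trainer = Trainer(model=model, args=args,
                  train_dataset=tokenized['train'], data_collator=collator)
trainer.train()

# Save the fine-tuned model and tokenizer for later use
trainer.save_model('cc-lm')
tokenizer.save_pretrained('cc-lm')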

Once the neural network has been trained, it can be used to perform a wide range of language tasks, including language modeling, text classification, and machine translation. Because Common Crawl contains text in multiple languages, neural networks trained on the dataset can be used to perform these tasks in multiple languages, making them a valuable resource for multilingual applications.
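
Continuing the hypothetical example above, the fine-tuned model can then be loaded with a text-generation pipeline for a quick check; cc-lm is the output directory chosen in the training sketch.

from transformers import pipeline

# Load the fine-tuned model saved by the training sketch
generator = pipeline('text-generation', model='cc-lm')
print(generator('Common Crawl is', max_new_tokens=40)[0]['generated_text'])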

Conclusion

Common Crawl is a valuable resource for training neural networks. The dataset provides a large and diverse sample of natural language, making it an excellent choice for training language models that can handle a wide range of language tasks. Additionally, the open access to the dataset encourages collaboration and innovation in the field of natural language processing and machine learning.

Using Common Crawl for neural network training involves several steps, including downloading and preprocessing the data and training the neural network. The choice of neural network architecture will depend on the specific task the network is being trained to perform.