README.rst (+11, -10)

@@ -46,19 +46,24 @@ A Python package & command-line tool to gather text on the Web
 Description
 -----------
 
-Trafilatura is a **Python package and command-line tool** designed gather text on the Web. It includes discovery, extraction and text processing components. Its main applications are **web crawling, downloads, scraping, and extraction** of main texts, comments and metadata. It aims at staying **handy and modular**: no database is required, the output can be converted to various commonly used formats.
+Trafilatura is a **Python package and command-line tool** designed to gather text on the Web. It includes discovery, extraction and text processing components. Its main applications are **web crawling, downloads, scraping, and extraction** of main texts, metadata and comments. It aims at staying **handy and modular**: no database is required, the output can be converted to various commonly used formats.
 
 Going from raw HTML to essential parts can alleviate many problems related to text quality, first by avoiding the **noise caused by recurring elements** (headers, footers, links/blogroll etc.) and second by including information such as author and date in order to **make sense of the data**. The extractor tries to strike a balance between limiting noise (precision) and including all valid parts (recall). It also has to be **robust and reasonably fast**, it runs in production on millions of documents.
 
-This tool can be **useful for quantitative research** in corpus linguistics, natural language processing, computational social science and beyond: it is also relevant to anyone interested in data science, information extraction, text mining, and in scraping-intensive use cases like search engine optimization, business analytics or information security.
+This tool can be **useful for quantitative research** in corpus linguistics, natural language processing, computational social science and beyond: it is relevant to anyone interested in data science, information extraction, text mining, and scraping-intensive use cases like search engine optimization, business analytics or information security.
 
 
 Features
 ~~~~~~~~
 
-- Seamless and parallel online/offline processing:
-  - Download and conversion utilities included
-  - URLs, HTML files or parsed HTML trees as input
+- Web crawling and text discovery:
+  - Focused crawling and politeness rules
+  - Support for sitemaps (TXT, XML) and feeds (ATOM, JSON, RSS)
+  - URL management (blacklists, filtering and de-duplication)
+- Seamless and parallel processing, online and offline:
+  - URLs, HTML files or parsed HTML trees usable as input
+  - Efficient and polite processing of download queues
+  - Conversion of previously downloaded files
 - Robust and efficient extraction:
   - Main text (with LXML, common patterns and generic algorithms: jusText, fork of readability-lxml)
   - Metadata (title, author, date, site name, categories and tags)
docs/index.rst (+11, -10)

@@ -38,19 +38,24 @@ A Python package & command-line tool to gather text on the Web
 Description
 -----------
 
-Trafilatura is a **Python package and command-line tool** designed gather text on the Web. It includes discovery, extraction and text processing components. Its main applications are **web crawling, downloads, scraping, and extraction** of main texts, metadata and comments. It aims at staying **handy and modular**: no database is required, the output can be converted to various commonly used formats.
+Trafilatura is a **Python package and command-line tool** designed to gather text on the Web. It includes discovery, extraction and text processing components. Its main applications are **web crawling, downloads, scraping, and extraction** of main texts, metadata and comments. It aims at staying **handy and modular**: no database is required, the output can be converted to various commonly used formats.
 
 Going from raw HTML to essential parts can alleviate many problems related to text quality, first by avoiding the **noise caused by recurring elements** (headers, footers, links/blogroll etc.) and second by including information such as author and date in order to **make sense of the data**. The extractor tries to strike a balance between limiting noise (precision) and including all valid parts (recall). It also has to be **robust and reasonably fast**, it runs in production on millions of documents.
 
-This tool can be **useful for quantitative research** in corpus linguistics, natural language processing, computational social science and beyond: it is also relevant to anyone interested in data science, information extraction, text mining, and in scraping-intensive use cases like search engine optimization, business analytics or information security.
+This tool can be **useful for quantitative research** in corpus linguistics, natural language processing, computational social science and beyond: it is relevant to anyone interested in data science, information extraction, text mining, and scraping-intensive use cases like search engine optimization, business analytics or information security.
 
 
 Features
 ~~~~~~~~
 
-- Seamless and parallel online/offline processing:
-  - Download and conversion utilities included
-  - URLs, HTML files or parsed HTML trees as input
+- Web crawling and text discovery:
+  - Focused crawling and politeness rules
+  - Support for sitemaps (TXT, XML) and feeds (ATOM, JSON, RSS)
+  - URL management (blacklists, filtering and de-duplication)
+- Seamless and parallel processing, online and offline:
+  - URLs, HTML files or parsed HTML trees usable as input
+  - Efficient and polite processing of download queues
+  - Conversion of previously downloaded files
 - Robust and efficient extraction:
   - Main text (with LXML, common patterns and generic algorithms: jusText, fork of readability-lxml)
   - Metadata (title, author, date, site name, categories and tags)