
Commit dc84763

docs: description syntax
1 parent 81f81e6 commit dc84763

File tree

2 files changed: +22 −20 lines changed


README.rst (+11 −10)
@@ -46,19 +46,24 @@ A Python package & command-line tool to gather text on the Web
 Description
 -----------

-Trafilatura is a **Python package and command-line tool** designed gather text on the Web. It includes discovery, extraction and text processing components. Its main applications are **web crawling, downloads, scraping, and extraction** of main texts, comments and metadata. It aims at staying **handy and modular**: no database is required, the output can be converted to various commonly used formats.
+Trafilatura is a **Python package and command-line tool** designed to gather text on the Web. It includes discovery, extraction and text processing components. Its main applications are **web crawling, downloads, scraping, and extraction** of main texts, metadata and comments. It aims at staying **handy and modular**: no database is required, the output can be converted to various commonly used formats.

 Going from raw HTML to essential parts can alleviate many problems related to text quality, first by avoiding the **noise caused by recurring elements** (headers, footers, links/blogroll etc.) and second by including information such as author and date in order to **make sense of the data**. The extractor tries to strike a balance between limiting noise (precision) and including all valid parts (recall). It also has to be **robust and reasonably fast**, it runs in production on millions of documents.

-This tool can be **useful for quantitative research** in corpus linguistics, natural language processing, computational social science and beyond: it is also relevant to anyone interested in data science, information extraction, text mining, and in scraping-intensive use cases like search engine optimization, business analytics or information security.
+This tool can be **useful for quantitative research** in corpus linguistics, natural language processing, computational social science and beyond: it is relevant to anyone interested in data science, information extraction, text mining, and scraping-intensive use cases like search engine optimization, business analytics or information security.


 Features
 ~~~~~~~~

-- Seamless and parallel online/offline processing:
-  - Download and conversion utilities included
-  - URLs, HTML files or parsed HTML trees as input
+- Web crawling and text discovery:
+  - Focused crawling and politeness rules
+  - Support for sitemaps (TXT, XML) and feeds (ATOM, JSON, RSS)
+  - URL management (blacklists, filtering and de-duplication)
+- Seamless and parallel processing, online and offline:
+  - URLs, HTML files or parsed HTML trees usable as input
+  - Efficient and polite processing of download queues
+  - Conversion of previously downloaded files
 - Robust and efficient extraction:
   - Main text (with LXML, common patterns and generic algorithms: jusText, fork of readability-lxml)
   - Metadata (title, author, date, site name, categories and tags)
@@ -69,14 +74,10 @@ Features
   - CSV (with metadata, `tab-separated values <https://en.wikipedia.org/wiki/Tab-separated_values>`_)
   - JSON (with metadata)
   - XML (with metadata, text formatting and page structure) and `TEI-XML <https://tei-c.org/>`_
-- Link discovery and URL management:
-  - Focused crawling and politeness rules
-  - Support for sitemaps (TXT, XML) and feeds (ATOM, JSON, RSS)
-  - Efficient and polite processing of URL queues
-  - Blacklisting
 - Optional add-ons:
   - Language detection on extracted content
   - Graphical user interface (GUI)
+  - Speed optimizations


 Evaluation and alternatives
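The "URL management (blacklists, filtering and de-duplication)" bullet introduced above can be pictured with a short sketch. The `filter_urls` helper below is hypothetical, uses only the Python standard library, and is not trafilatura's actual API; it just shows the three steps the bullet names:

```python
from urllib.parse import urlsplit

def filter_urls(urls, blacklisted_hosts):
    """Keep only usable web URLs: filter schemes, apply a host
    blacklist, and de-duplicate while preserving order."""
    seen = set()
    kept = []
    for url in urls:
        parts = urlsplit(url)
        if parts.scheme not in ("http", "https"):
            continue  # filtering: only web URLs are crawlable
        if parts.hostname in blacklisted_hosts:
            continue  # blacklist: skip unwanted hosts
        if url in seen:
            continue  # de-duplication: each URL downloaded once
        seen.add(url)
        kept.append(url)
    return kept
```

In trafilatura itself these rules apply to whole download queues, which is where the politeness and parallel-processing bullets come in.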

docs/index.rst (+11 −10)
@@ -38,19 +38,24 @@ A Python package & command-line tool to gather text on the Web
 Description
 -----------

-Trafilatura is a **Python package and command-line tool** designed gather text on the Web. It includes discovery, extraction and text processing components. Its main applications are **web crawling, downloads, scraping, and extraction** of main texts, metadata and comments. It aims at staying **handy and modular**: no database is required, the output can be converted to various commonly used formats.
+Trafilatura is a **Python package and command-line tool** designed to gather text on the Web. It includes discovery, extraction and text processing components. Its main applications are **web crawling, downloads, scraping, and extraction** of main texts, metadata and comments. It aims at staying **handy and modular**: no database is required, the output can be converted to various commonly used formats.

 Going from raw HTML to essential parts can alleviate many problems related to text quality, first by avoiding the **noise caused by recurring elements** (headers, footers, links/blogroll etc.) and second by including information such as author and date in order to **make sense of the data**. The extractor tries to strike a balance between limiting noise (precision) and including all valid parts (recall). It also has to be **robust and reasonably fast**, it runs in production on millions of documents.

-This tool can be **useful for quantitative research** in corpus linguistics, natural language processing, computational social science and beyond: it is also relevant to anyone interested in data science, information extraction, text mining, and in scraping-intensive use cases like search engine optimization, business analytics or information security.
+This tool can be **useful for quantitative research** in corpus linguistics, natural language processing, computational social science and beyond: it is relevant to anyone interested in data science, information extraction, text mining, and scraping-intensive use cases like search engine optimization, business analytics or information security.


 Features
 ~~~~~~~~

-- Seamless and parallel online/offline processing:
-  - Download and conversion utilities included
-  - URLs, HTML files or parsed HTML trees as input
+- Web crawling and text discovery:
+  - Focused crawling and politeness rules
+  - Support for sitemaps (TXT, XML) and feeds (ATOM, JSON, RSS)
+  - URL management (blacklists, filtering and de-duplication)
+- Seamless and parallel processing, online and offline:
+  - URLs, HTML files or parsed HTML trees usable as input
+  - Efficient and polite processing of download queues
+  - Conversion of previously downloaded files
 - Robust and efficient extraction:
   - Main text (with LXML, common patterns and generic algorithms: jusText, fork of readability-lxml)
   - Metadata (title, author, date, site name, categories and tags)
@@ -61,14 +66,10 @@ Features
   - CSV (with metadata, `tab-separated values <https://en.wikipedia.org/wiki/Tab-separated_values>`_)
   - JSON (with metadata)
   - XML (with metadata, text formatting and page structure) and `TEI-XML <https://tei-c.org/>`_
-- Link discovery and URL management:
-  - Focused crawling and politeness rules
-  - Support for sitemaps (TXT, XML) and feeds (ATOM, JSON, RSS)
-  - Efficient and polite processing of URL queues
-  - Blacklisting
 - Optional add-ons:
   - Language detection on extracted content
   - Graphical user interface (GUI)
+  - Speed optimizations


 Evaluation and alternatives
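The description's balance between limiting noise (precision) and including all valid parts (recall) can be made concrete with a toy scoring function. `precision_recall` below is a hypothetical helper of the kind used when benchmarking an extractor against a reference text; it is not part of trafilatura:

```python
def precision_recall(extracted_tokens, reference_tokens):
    """Token-level scores: precision = share of extracted tokens that
    belong in the reference; recall = share of reference tokens that
    were actually extracted."""
    extracted, reference = set(extracted_tokens), set(reference_tokens)
    true_positives = len(extracted & reference)
    precision = true_positives / len(extracted) if extracted else 0.0
    recall = true_positives / len(reference) if reference else 0.0
    return precision, recall
```

A strict extractor drops boilerplate tokens (raising precision) at the risk of dropping valid text (lowering recall); a permissive one does the opposite.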
