Commit df4e001

Merge pull request #426 from nf-core/dev
1.1.3 Patch release

2 parents 3d4eda2 + 2fc3c02, commit df4e001


58 files changed (+1319, -190 lines)

.github/CONTRIBUTING.md (+3)

@@ -27,6 +27,9 @@ If you're not used to this workflow with git, you can start with some [docs from

 ## Tests

+You can optionally test your changes by running the pipeline locally. Then it is recommended to use the `debug` profile to
+receive warnings about process selectors and other debug info. Example: `nextflow run . -profile debug,test,docker --outdir <OUTDIR>`.
+
 When you create a pull request with changes, [GitHub Actions](https://github.com/features/actions) will run automatic tests.
 Typically, pull-requests are only fully reviewed when these tests are passing, though of course we can help out before then.
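The `debug` profile that this new paragraph recommends comes from the standard nf-core pipeline template rather than from this diff. As a rough sketch of what such a profile typically enables in `nextflow.config` (the exact contents in taxprofiler may differ):

```nextflow
// Hedged sketch of a typical nf-core `debug` profile (assumed, not part of this commit)
profiles {
    debug {
        dumpHashes           = true              // print task hash inputs, useful when debugging caching
        process.beforeScript = 'echo $HOSTNAME'  // record the execution host of every task
        cleanup              = false             // keep work directories for inspection
    }
}
```

Combined with the test profile, `nextflow run . -profile debug,test,docker --outdir <OUTDIR>` then surfaces warnings about unmatched process selectors and other debug information before a pull request is opened.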

.github/PULL_REQUEST_TEMPLATE.md (+1)

@@ -19,6 +19,7 @@ Learn more about contributing: [CONTRIBUTING.md](https://github.com/nf-core/taxp
 - [ ] If necessary, also make a PR on the nf-core/taxprofiler _branch_ on the [nf-core/test-datasets](https://github.com/nf-core/test-datasets) repository.
 - [ ] Make sure your code lints (`nf-core lint`).
 - [ ] Ensure the test suite passes (`nextflow run . -profile test,docker --outdir <OUTDIR>`).
+- [ ] Check for unexpected warnings in debug mode (`nextflow run . -profile debug,test,docker --outdir <OUTDIR>`).
 - [ ] Usage Documentation in `docs/usage.md` is updated.
 - [ ] Output Documentation in `docs/output.md` is updated.
 - [ ] `CHANGELOG.md` is updated.

.github/workflows/ci.yml (+1, -1)

@@ -42,7 +42,7 @@ jobs:

     steps:
       - name: Check out pipeline code
-        uses: actions/checkout@v3
+        uses: actions/checkout@v4

       - name: Install Nextflow
         uses: nf-core/setup-nextflow@v1

.github/workflows/fix-linting.yml (+2, -2)

@@ -13,7 +13,7 @@ jobs:
     runs-on: ubuntu-latest
     steps:
       # Use the @nf-core-bot token to check out so we can push later
-      - uses: actions/checkout@v3
+      - uses: actions/checkout@v4
         with:
           token: ${{ secrets.nf_core_bot_auth_token }}

@@ -24,7 +24,7 @@ jobs:
         env:
           GITHUB_TOKEN: ${{ secrets.nf_core_bot_auth_token }}

-      - uses: actions/setup-node@v3
+      - uses: actions/setup-node@v4

       - name: Install Prettier
         run: npm install -g prettier @prettier/plugin-php

.github/workflows/linting.yml (+6, -6)

@@ -14,9 +14,9 @@ jobs:
   EditorConfig:
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@v3
+      - uses: actions/checkout@v4

-      - uses: actions/setup-node@v3
+      - uses: actions/setup-node@v4

       - name: Install editorconfig-checker
         run: npm install -g editorconfig-checker

@@ -27,9 +27,9 @@ jobs:
   Prettier:
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@v3
+      - uses: actions/checkout@v4

-      - uses: actions/setup-node@v3
+      - uses: actions/setup-node@v4

       - name: Install Prettier
         run: npm install -g prettier

@@ -40,7 +40,7 @@ jobs:
   PythonBlack:
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@v3
+      - uses: actions/checkout@v4

       - name: Check code lints with Black
         uses: psf/black@stable

@@ -71,7 +71,7 @@ jobs:
     runs-on: ubuntu-latest
     steps:
       - name: Check out pipeline code
-        uses: actions/checkout@v3
+        uses: actions/checkout@v4

       - name: Install Nextflow
         uses: nf-core/setup-nextflow@v1

.gitpod.yml (+3, -1)

@@ -4,7 +4,9 @@ tasks:
     command: |
       pre-commit install --install-hooks
       nextflow self-update
-
+  - name: unset JAVA_TOOL_OPTIONS
+    command: |
+      unset JAVA_TOOL_OPTIONS
 vscode:
   extensions: # based on nf-core.nf-core-extensionpack
     - codezombiech.gitignore # Language support for .gitignore files

CHANGELOG.md (+20)

@@ -3,6 +3,26 @@
 The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/)
 and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

+## v1.1.3 - Augmented Akita Patch [2024-01-12]
+
+### `Added`
+
+- [#424](https://github.com/nf-core/taxprofiler/pull/424) Updated to nf-core pipeline template v2.11.1 (added by @LilyAnderssonLee & @sofstam)
+
+### `Fixed`
+
+- [#419](https://github.com/nf-core/taxprofiler/pull/419) Added improved syntax highlighting for tables in documentation (fix by @mashehu)
+- [#421](https://github.com/nf-core/taxprofiler/pull/421) Updated the krakenuniq/preloadedkrakenuniq module that contained a fix for saving the output reads (❤️ to @SannaAb for reporting, fix by @Midnighter)
+- [#427](https://github.com/nf-core/taxprofiler/pull/427) Fixed preprint information in the recommended methods text (fix by @jfy133)
+
+### `Dependencies`
+
+| Tool          | Previous version | New version |
+| ------------- | ---------------- | ----------- |
+| multiqc       | 1.15             | 1.19        |
+| fastqc        | 11.9             | 12.1        |
+| nf-validation | unpinned         | 1.1.3       |
+
 ## v1.1.2 - Augmented Akita Patch [2023-10-27]

 ### `Added`

README.md (+5, -10)

@@ -47,11 +47,8 @@

 ## Usage

-:::note
-If you are new to Nextflow and nf-core, please refer to [this page](https://nf-co.re/docs/usage/installation) on how
-to set-up Nextflow. Make sure to [test your setup](https://nf-co.re/docs/usage/introduction#how-to-run-a-pipeline)
-with `-profile test` before running the workflow on actual data.
-:::
+> [!NOTE]
+> If you are new to Nextflow and nf-core, please refer to [this page](https://nf-co.re/docs/usage/installation) on how to set-up Nextflow. Make sure to [test your setup](https://nf-co.re/docs/usage/introduction#how-to-run-a-pipeline) with `-profile test` before running the workflow on actual data.

 First, prepare a samplesheet with your input data that looks as follows:

@@ -89,11 +86,9 @@ nextflow run nf-core/taxprofiler \
     --run_kraken2 --run_metaphlan
 ```

-:::warning
-Please provide pipeline parameters via the CLI or Nextflow `-params-file` option. Custom config files including those
-provided by the `-c` Nextflow option can be used to provide any configuration _**except for parameters**_;
-see [docs](https://nf-co.re/usage/configuration#custom-configuration-files).
-:::
+> [!WARNING]
+> Please provide pipeline parameters via the CLI or Nextflow `-params-file` option. Custom config files including those provided by the `-c` Nextflow option can be used to provide any configuration _**except for parameters**_;
+> see [docs](https://nf-co.re/usage/configuration#custom-configuration-files).

 For more details and further functionality, please refer to the [usage documentation](https://nf-co.re/taxprofiler/usage) and the [parameter documentation](https://nf-co.re/taxprofiler/parameters).

assets/methods_description_template.yml (+2, -3)

@@ -3,8 +3,6 @@ description: "Suggested text and references to use when describing pipeline usag
 section_name: "nf-core/taxprofiler Methods Description"
 section_href: "https://github.com/nf-core/taxprofiler"
 plot_type: "html"
-## TODO nf-core: Update the HTML below to your preferred methods description, e.g. add publication citation for this pipeline
-## You inject any metadata in the Nextflow '${workflow}' object
 data: |
   <h4>Methods</h4>
   <p>Data was processed using nf-core/taxprofiler v${workflow.manifest.version} ${doi_text} of the nf-core collection of workflows (<a href="https://doi.org/10.1038/s41587-020-0439-x">Ewels <em>et al.</em>, 2020</a>), utilising reproducible software environments from the Bioconda (<a href="https://doi.org/10.1038/s41592-018-0046-7">Grüning <em>et al.</em>, 2018</a>) and Biocontainers (<a href="https://doi.org/10.1093/bioinformatics/btx192">da Veiga Leprevost <em>et al.</em>, 2017</a>) projects.</p>

@@ -17,12 +15,13 @@ data: |
     <li>Ewels, P. A., Peltzer, A., Fillinger, S., Patel, H., Alneberg, J., Wilm, A., Garcia, M. U., Di Tommaso, P., & Nahnsen, S. (2020). The nf-core framework for community-curated bioinformatics pipelines. Nature Biotechnology, 38(3), 276-278. doi: <a href="https://doi.org/10.1038/s41587-020-0439-x">10.1038/s41587-020-0439-x</a></li>
     <li>Grüning, B., Dale, R., Sjödin, A., Chapman, B. A., Rowe, J., Tomkins-Tinch, C. H., Valieris, R., Köster, J., & Bioconda Team. (2018). Bioconda: sustainable and comprehensive software distribution for the life sciences. Nature Methods, 15(7), 475–476. doi: <a href="https://doi.org/10.1038/s41592-018-0046-7">10.1038/s41592-018-0046-7</a></li>
     <li>da Veiga Leprevost, F., Grüning, B. A., Alves Aflitos, S., Röst, H. L., Uszkoreit, J., Barsnes, H., Vaudel, M., Moreno, P., Gatto, L., Weber, J., Bai, M., Jimenez, R. C., Sachsenberg, T., Pfeuffer, J., Vera Alvarez, R., Griss, J., Nesvizhskii, A. I., & Perez-Riverol, Y. (2017). BioContainers: an open-source and community-driven framework for software standardization. Bioinformatics (Oxford, England), 33(16), 2580–2582. doi: <a href="https://doi.org/10.1093/bioinformatics/btx192">10.1093/bioinformatics/btx192</a></li>
+    <li>Stamouli, S., Beber, M. E., Normark, T., Christensen, T. A., Andersson-Li, L., Borry, M., Jamy, M., nf-core community, & Fellows Yates, J. A. (2023). nf-core/taxprofiler: Highly parallelised and flexible pipeline for metagenomic taxonomic classification and profiling. (Preprint). bioRxiv 2023.10.20.563221. doi: <a href="https://doi.org/10.1101/2023.10.20.563221">10.1101/2023.10.20.563221</a></li>
     ${tool_bibliography}
   </ul>
   <div class="alert alert-info">
     <h5>Notes:</h5>
     <ul>
-      ${nodoi_text}
+      ${doi_text}
       <li>The command above does not include parameters contained in any configs or profiles that may have been used. Ensure the config file is also uploaded with your publication!</li>
       <li>You should also cite all software used within this run. Check the "Software Versions" of this report to get version information.</li>
     </ul>

assets/multiqc_config.yml (+2, -2)

@@ -1,7 +1,7 @@
 report_comment: >
-  This report has been generated by the <a href="https://github.com/nf-core/taxprofiler/releases/tag/1.1.2" target="_blank">nf-core/taxprofiler</a>
+  This report has been generated by the <a href="https://github.com/nf-core/taxprofiler/releases/tag/1.1.3" target="_blank">nf-core/taxprofiler</a>
   analysis pipeline. For information about how to interpret these results, please see the
-  <a href="https://nf-co.re/taxprofiler/1.1.2/docs/output" target="_blank">documentation</a>.
+  <a href="https://nf-co.re/taxprofiler/1.1.3/docs/output" target="_blank">documentation</a>.

 report_section_order:
   "nf-core-taxprofiler-methods-description":

assets/slackreport.json (+1, -1)

@@ -3,7 +3,7 @@
         {
             "fallback": "Plain-text summary of the attachment.",
             "color": "<% if (success) { %>good<% } else { %>danger<%} %>",
-            "author_name": "nf-core/taxprofiler v${version} - ${runName}",
+            "author_name": "nf-core/taxprofiler ${version} - ${runName}",
             "author_icon": "https://www.nextflow.io/docs/latest/_static/favicon.ico",
             "text": "<% if (success) { %>Pipeline completed successfully!<% } else { %>Pipeline completed with errors<% } %>",
             "fields": [

conf/modules.config (+2, -2)

@@ -500,7 +500,7 @@ process {
         publishDir = [
             path: { "${params.outdir}/krakenuniq/${meta.db_name}/" },
             mode: params.publish_dir_mode,
-            pattern: '*.{txt,fastq.gz}'
+            pattern: '*.{txt,fasta.gz}'
         ]
     }

@@ -769,7 +769,7 @@ process {
     }

     withName: 'MULTIQC' {
-        ext.args = params.multiqc_title ? "--title \"$params.multiqc_title\"" : ''
+        ext.args = { params.multiqc_title ? "--title \"$params.multiqc_title\"" : '' }
         publishDir = [
             path: { "${params.outdir}/multiqc" },
             mode: params.publish_dir_mode,
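The `MULTIQC` change above wraps `ext.args` in a closure so the ternary is evaluated lazily, per task, instead of once when the configuration is first parsed. A minimal sketch of the pattern, simplified from the hunk above:

```nextflow
// Eager vs lazy ext.args in a process selector (sketch based on the hunk above)
process {
    withName: 'MULTIQC' {
        // Eager: resolved once at config-parse time
        // ext.args = params.multiqc_title ? "--title \"$params.multiqc_title\"" : ''

        // Lazy: the closure is resolved when each task is created, so it picks up
        // the final value of params.multiqc_title
        ext.args = { params.multiqc_title ? "--title \"$params.multiqc_title\"" : '' }
    }
}
```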

docs/output.md (+7, -7)

@@ -375,23 +375,23 @@ You will only receive the `.fastq` and `*classifiedreads.txt` file if you supply

 ### KrakenUniq

-[KrakenUniq](https://github.com/fbreitwieser/krakenuniq) (formerly KrakenHLL) is an extenson to the fast k-mer-based classification [Kraken](https://github.com/DerrickWood/kraken) with an efficient algorithm for additionally assessing the coverage of unique k-mers found in each species in a dataset.
+[KrakenUniq](https://github.com/fbreitwieser/krakenuniq) (formerly KrakenHLL) is an extension to the fast k-mer-based classification performed by [Kraken](https://github.com/DerrickWood/kraken) with an efficient algorithm for additionally assessing the coverage of unique k-mers found in each species in a dataset.

 <details markdown="1">
 <summary>Output files</summary>

 - `krakenuniq/`
   - `<db_name>/`
-    - `<sample_id>_<db_name>.classified.fastq.gz`: FASTQ file containing all reads that had a hit against a reference in the database for a given sample
-    - `<sample_id>_<db_name>.unclassified.fastq.gz`: FASTQ file containing all reads that did not have a hit in the database for a given sample
-    - `<sample_id>_<db_name>.report.txt`: A Kraken2-style report that summarises the fraction abundance, taxonomic ID, number of Kmers, taxonomic path of all the hits, with an additional column for k-mer coverage, that allows for more accurate distinguishing between false-positive/true-postitive hits
-    - `<sample_id>_<db_name>.classifiedreads.txt`: A list of read IDs and the hits each read had against each database for a given sample
+    - `<sample_id>_<db_name>[.merged].classified.fasta.gz`: Optional FASTA file containing all reads that had a hit against a reference in the database for a given sample. Paired-end input reads are merged in this output.
+    - `<sample_id>_<db_name>[.merged].unclassified.fasta.gz`: Optional FASTA file containing all reads that did not have a hit in the database for a given sample. Paired-end input reads are merged in this output.
+    - `<sample_id>_<db_name>.krakenuniq.report.txt`: A Kraken2-style report that summarises the fraction abundance, taxonomic ID, number of Kmers, taxonomic path of all the hits, with an additional column for k-mer coverage, that allows for more accurate distinguishing between false-positive/true-postitive hits.
+    - `<sample_id>_<db_name>.krakenuniq.classified.txt`: An optional list of read IDs and the hits each read had against each database for a given sample.

 </details>

-The main taxonomic classification file from KrakenUniq is the `*report.txt` file. This is an extension of the Kraken2 report with the additional k-mer coverage information that provides more information about the accuracy of hits.
+The main taxonomic classification file from KrakenUniq is the `*.krakenuniq.report.txt` file. This is an extension of the Kraken2 report with the additional k-mer coverage information that provides more information about the accuracy of hits.

-You will only receive the `.fastq` and `*classifiedreads.txt` file if you supply `--krakenuniq_save_reads` and/or `--krakenuniq_save_readclassification` parameters to the pipeline.
+You will only receive the `.fasta.gz` and `*.krakenuniq.classified.txt` file if you supply `--krakenuniq_save_reads` and/or `--krakenuniq_save_readclassification` parameters to the pipeline.

 :::info
 The output system of KrakenUniq can result in other `stdout` or `stderr` logging information being saved in the report file, therefore you must check your report files before downstream use!

docs/usage.md (+3, -4)

@@ -44,12 +44,11 @@ This samplesheet is then specified on the command line as follows:

 The `sample` identifiers have to be the same when you have re-sequenced the same sample more than once e.g. to increase sequencing depth. The pipeline will concatenate different runs FASTQ files of the same sample before performing profiling, when `--perform_runmerging` is supplied. Below is an example for the same sample sequenced across 3 lanes:

-```console
+```csv title="samplesheet.csv"
 sample,run_accession,instrument_platform,fastq_1,fastq_2,fasta
 2612,run1,ILLUMINA,2612_run1_R1.fq.gz,,
 2612,run2,ILLUMINA,2612_run2_R1.fq.gz,,
 2612,run3,ILLUMINA,2612_run3_R1.fq.gz,2612_run3_R2.fq.gz,
-
 ```

 :::warning

@@ -62,7 +61,7 @@ The pipeline will auto-detect whether a sample is single- or paired-end using th

 A final samplesheet file consisting of both single- and paired-end data, as well as long-read FASTA files may look something like the one below. This is for 6 samples, where `2612` has been sequenced twice.

-```console
+```csv title="samplesheet.csv"
 sample,run_accession,instrument_platform,fastq_1,fastq_2,fasta
 2611,ERR5766174,ILLUMINA,,,/<path>/<to>/fasta/ERX5474930_ERR5766174_1.fa.gz
 2612,ERR5766176,ILLUMINA,/<path>/<to>/fastq/ERX5474932_ERR5766176_1.fastq.gz,/<path>/<to>/fastq/ERX5474932_ERR5766176_2.fastq.gz,

@@ -110,7 +109,7 @@ An example database sheet can look as follows, where 7 tools are being used, and

 `kraken2` will be run twice even though only having a single 'dedicated' database because specifying `bracken` implies first running `kraken2` on the `bracken` database, as required by `bracken`.

-```console
+```csv
 tool,db_name,db_params,db_path
 malt,malt85,-id 85,/<path>/<to>/malt/testdb-malt/
 malt,malt95,-id 90,/<path>/<to>/malt/testdb-malt.tar.gz
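The samplesheet hunks above illustrate run merging: rows that share a `sample` identifier are concatenated before profiling when `--perform_runmerging` is supplied. As a hypothetical, self-contained Nextflow sketch of that idea (not taxprofiler's actual implementation; single-end reads only, and the process name `CAT_RUNS` is invented for illustration):

```nextflow
// runmerge_sketch.nf — hypothetical illustration of per-sample run merging
nextflow.enable.dsl = 2

process CAT_RUNS {
    input:
    tuple val(sample), path(reads)

    output:
    tuple val(sample), path("${sample}.merged.fastq.gz")

    script:
    """
    cat ${reads} > ${sample}.merged.fastq.gz
    """
}

workflow {
    runs_ch = Channel
        .fromPath(params.input)                              // samplesheet.csv
        .splitCsv(header: true)
        .map { row -> tuple(row.sample, file(row.fastq_1)) } // key each run by its sample id
        .groupTuple()                                        // gather all runs of the same sample

    CAT_RUNS(runs_ch)
}
```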

lib/NfcoreTemplate.groovy (+18, -14)

@@ -4,6 +4,7 @@

 import org.yaml.snakeyaml.Yaml
 import groovy.json.JsonOutput
+import nextflow.extension.FilesEx

 class NfcoreTemplate {

@@ -141,12 +142,14 @@ class NfcoreTemplate {
         try {
             if (params.plaintext_email) { throw GroovyException('Send plaintext e-mail, not HTML') }
             // Try to send HTML e-mail using sendmail
+            def sendmail_tf = new File(workflow.launchDir.toString(), ".sendmail_tmp.html")
+            sendmail_tf.withWriter { w -> w << sendmail_html }
             [ 'sendmail', '-t' ].execute() << sendmail_html
             log.info "-${colors.purple}[$workflow.manifest.name]${colors.green} Sent summary e-mail to $email_address (sendmail)-"
         } catch (all) {
             // Catch failures and try with plaintext
             def mail_cmd = [ 'mail', '-s', subject, '--content-type=text/html', email_address ]
-            if ( mqc_report.size() <= max_multiqc_email_size.toBytes() ) {
+            if ( mqc_report != null && mqc_report.size() <= max_multiqc_email_size.toBytes() ) {
                 mail_cmd += [ '-A', mqc_report ]
             }
             mail_cmd.execute() << email_html

@@ -155,14 +158,16 @@ class NfcoreTemplate {
         }

         // Write summary e-mail HTML to a file
-        def output_d = new File("${params.outdir}/pipeline_info/")
-        if (!output_d.exists()) {
-            output_d.mkdirs()
-        }
-        def output_hf = new File(output_d, "pipeline_report.html")
+        def output_hf = new File(workflow.launchDir.toString(), ".pipeline_report.html")
         output_hf.withWriter { w -> w << email_html }
-        def output_tf = new File(output_d, "pipeline_report.txt")
+        FilesEx.copyTo(output_hf.toPath(), "${params.outdir}/pipeline_info/pipeline_report.html");
+        output_hf.delete()
+
+        // Write summary e-mail TXT to a file
+        def output_tf = new File(workflow.launchDir.toString(), ".pipeline_report.txt")
         output_tf.withWriter { w -> w << email_txt }
+        FilesEx.copyTo(output_tf.toPath(), "${params.outdir}/pipeline_info/pipeline_report.txt");
+        output_tf.delete()
     }

     //

@@ -227,15 +232,14 @@ class NfcoreTemplate {
     // Dump pipeline parameters in a json file
     //
     public static void dump_parameters(workflow, params) {
-        def output_d = new File("${params.outdir}/pipeline_info/")
-        if (!output_d.exists()) {
-            output_d.mkdirs()
-        }
-
         def timestamp = new java.util.Date().format( 'yyyy-MM-dd_HH-mm-ss')
-        def output_pf = new File(output_d, "params_${timestamp}.json")
+        def filename = "params_${timestamp}.json"
+        def temp_pf = new File(workflow.launchDir.toString(), ".${filename}")
         def jsonStr = JsonOutput.toJson(params)
-        output_pf.text = JsonOutput.prettyPrint(jsonStr)
+        temp_pf.text = JsonOutput.prettyPrint(jsonStr)
+
+        FilesEx.copyTo(temp_pf.toPath(), "${params.outdir}/pipeline_info/params_${timestamp}.json")
+        temp_pf.delete()
     }

     //
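These hunks replace direct `java.io.File` writes under `params.outdir` with a write-locally-then-copy pattern: each report is first written as a hidden temporary file in the launch directory and then published with `nextflow.extension.FilesEx.copyTo`, which goes through Nextflow's file-system providers and so can also handle remote `--outdir` locations such as object storage. A stripped-down, hypothetical helper showing the same pattern (the method name `publishText` is invented for illustration):

```groovy
import nextflow.extension.FilesEx

// Write `content` to a hidden temp file in the launch directory, publish it into
// params.outdir via Nextflow's file-system providers, then clean up the temp file.
def publishText(workflow, params, String name, String content) {
    def temp = new File(workflow.launchDir.toString(), ".${name}")
    temp.text = content
    FilesEx.copyTo(temp.toPath(), "${params.outdir}/pipeline_info/${name}")
    temp.delete()
}
```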
