
Provide pre-pipeline function to modify options based on input image metadata #236


Open · lovell opened this issue Jun 23, 2015 · 16 comments

@lovell (Owner) commented Jun 23, 2015

This could seriously increase the power of Stream-based image processing.

Here's an example of how halving an image's dimensions could work:

// PROPOSED API NOT YET AVAILABLE
var halver = sharp().before(function(metadata) {
  this.resize(metadata.width / 2, metadata.height / 2);
});
readableStream.pipe(halver).pipe(writableStream);
// PROPOSED API NOT YET AVAILABLE

The existing metadata logic can also be improved to require only the first few hundred bytes of a Stream.

Adding a "playbook" of example uses for this feature to the docs would also be great. There's a variable/percentage extract/crop under discussion in #205 that this should allow for.
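
As a further sketch of what this would enable, here's how a centre crop to 50% of the input dimensions might look (same caveat: proposed API, not yet available, and the percentage is only illustrative):

// PROPOSED API NOT YET AVAILABLE
var centreCrop = sharp().before(function(metadata) {
  var width = Math.round(metadata.width / 2);
  var height = Math.round(metadata.height / 2);
  // extract the central region, computed from the real input dimensions
  this.extract({
    left: Math.round((metadata.width - width) / 2),
    top: Math.round((metadata.height - height) / 2),
    width: width,
    height: height
  });
});
readableStream.pipe(centreCrop).pipe(writableStream);
// PROPOSED API NOT YET AVAILABLE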

@Lanfei commented Jan 21, 2016

Is the before function not supported yet?

@lovell (Owner, Author) commented Jan 21, 2016

@Lanfei Not yet, sorry, but please do subscribe to this issue for updates.

@Lanfei commented Jan 21, 2016

Okay.

@jaredscheib

Hi @lovell, I think this issue addresses what I'm trying to solve currently as well – I've subscribed for notifications for when .before is implemented.

In the meantime, might you have a suggestion for how I can verify that a stream contains valid image data before piping it into a transform? I'm using request to pipe data from various URLs that may or may not point at valid images.

I tried piping into a sharp().metadata() instance, handling any errors, and then continuing to pipe the stream from inside the success callback (which would seem to imply that it successfully read metadata from a valid image), but I get the error: You cannot pipe after data has been emitted from the response. I understand why this is happening but haven't come up with a solution, such as pausing the stream and then resuming it from inside the callback.

@lovell (Owner, Author) commented Mar 5, 2016

@jaredscheib This feature (possibly with #298) should allow you to achieve what you need, yes. In the meantime, the least complex (and race-condition free) method of doing so is probably to store the streamed data in a Buffer.
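
A rough sketch of that Buffer-based approach (readableStream, writableStream and the toBuffer helper are placeholders, not part of sharp):

const sharp = require('sharp');

// Collect the incoming stream into a Buffer first
function toBuffer(stream) {
  return new Promise((resolve, reject) => {
    const chunks = [];
    stream.on('data', (chunk) => chunks.push(chunk));
    stream.on('error', reject);
    stream.on('end', () => resolve(Buffer.concat(chunks)));
  });
}

async function halve(readableStream, writableStream) {
  const input = await toBuffer(readableStream);
  // metadata() rejects if the Buffer does not contain valid image data
  const { width, height } = await sharp(input).metadata();
  sharp(input)
    .resize(Math.round(width / 2), Math.round(height / 2))
    .pipe(writableStream);
}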

@elliotfleming

What is the status of this feature?

@lovell (Owner, Author) commented Sep 8, 2016

@elliotfleming This is yet to be implemented. As always I'm happy to provide guidance to anyone interested in tackling it.

@niftylettuce

It'd be really great to have this - I'd like to check image dimensions and throw an error if they don't meet my criteria before transforming my stream.
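
For illustration, with the proposed before this might look something like the following (sketch only; minWidth, minHeight, targetWidth and targetHeight are placeholders, and how a thrown error would propagate is an assumption):

// PROPOSED API NOT YET AVAILABLE - sketch only
const validator = sharp().before(function (metadata) {
  if (metadata.width < minWidth || metadata.height < minHeight) {
    throw new Error('Input image is smaller than the required dimensions');
  }
  this.resize(targetWidth, targetHeight);
});
readableStream.pipe(validator).pipe(writableStream);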

@max-degterev

It would be very nice to have either this or the ability to change the image scale, specifically some method like .scale(2) that would double the image dimensions.
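
Until something like that exists, the same effect can be approximated once the input is available as a Buffer (sketch only; doubleSize is a hypothetical helper, not a sharp method):

const sharp = require('sharp');

// Hypothetical helper: double the dimensions of an image held in a Buffer
async function doubleSize(inputBuffer) {
  const { width, height } = await sharp(inputBuffer).metadata();
  return sharp(inputBuffer)
    .resize(width * 2, height * 2)
    .toBuffer();
}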

@gwen1230

Any news on this feature?

@justinmchase commented Nov 7, 2019

Does the sharp.metadata() function read the entire stream into memory, or will a follow-up resize operation currently result in a re-read of the entire stream?

I am currently using something like:

const image = stream.pipe(sharp())
const { width, height } = await image.metadata()
upload(image.resize(...))

EDIT: I see it's waiting for the entire file to be read into memory before calculating the metadata... which should not even be allowed for streams tbh.

Could this be solved by having a second function called .dimensions() which only returns the dimensions instead of all of the metadata?

@teoxoy commented Dec 13, 2019

It would be nice to have this feature, but I'm using image-size-stream as an alternative for now.

Example:

// Core imports assumed for this sketch; ImageDimensionsStream comes from the
// image-size-stream package, and X, Y, W, H (the requested extract region)
// plus `res` (the HTTP response) are defined elsewhere.
const fs = require('fs')
const { PassThrough } = require('stream')
const sharp = require('sharp')

const fileStream = fs.createReadStream(path)
const sizeStream = new ImageDimensionsStream()
const bufferStream = new PassThrough().pause()

let stream = fileStream.pipe(sizeStream).pipe(bufferStream)

sizeStream.on('dimensions', ({ width, height }) => {
    // take a decision based on the image size
    if (W !== 0 && H !== 0 && (X !== 0 || Y !== 0 || width !== W || height !== H)) {
        const extractStream = sharp()
            .extract({ left: X, top: Y, width: W, height: H })
        stream = stream.pipe(extractStream)
    }
    bufferStream.resume()
    res.send(stream)
})

Using a paused PassThrough stream as a buffer while ImageDimensionsStream returns the image dimensions works out nicely.

@omarish (comment marked as off-topic)

@xkguq007 (comment marked as off-topic)

@ThaDaVos commented Aug 3, 2022

Any progress on this? I'm looking to do the same, but for the case where an image is being contained: I want to replace the black bars (if any appear) with a blurred, stretched copy of the image, to create an effect a customer requested. But as there's no before, and I'm using Streams because I'm working with S3 (it eases things a lot), something like before would be really handy here (I haven't yet found a solution to get the metadata and then process the image).
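
For what it's worth, the effect itself is straightforward once the image is held in a Buffer rather than a Stream; a rough sketch (containWithBlurredBars is a hypothetical helper and the target dimensions are placeholders):

const sharp = require('sharp');

// Hypothetical helper: fit the image inside targetWidth x targetHeight and fill
// the remaining "bars" with a blurred, stretched copy of the same image
async function containWithBlurredBars(inputBuffer, targetWidth, targetHeight) {
  const background = await sharp(inputBuffer)
    .resize(targetWidth, targetHeight, { fit: 'fill' }) // stretch to cover the whole frame
    .blur(20)
    .toBuffer();
  const foreground = await sharp(inputBuffer)
    .resize(targetWidth, targetHeight, { fit: 'inside' }) // contain, preserving aspect ratio
    .toBuffer();
  return sharp(background)
    .composite([{ input: foreground, gravity: 'centre' }])
    .toBuffer();
}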

@lazarljubenovic

I'm not sure I understand the semantics of the proposed before method. Before what? Also, its reliance on this would make it difficult or impossible to use arrow function syntax.

I'd expect to just be able to supply the usual operation parameter via a function that gets called with the metadata, considering that (almost?) all operations have a signature that takes only a single parameter.

sharp('image.png')
  .resize(({ width }) => width / 2)

It would also be backward compatible, since no operation currently accepts a function.
