Stream video example shows switched RGB colors (red <-> blue) #41

Closed
nunoguedelha opened this issue Apr 4, 2022 · 14 comments
@nunoguedelha (Collaborator)

When running the stream video example and streaming the Framegrabber "bouncing" ball or the scrolling "line", we can clearly see that the RGB colors are swapped from RGB to BGR.

Using the bouncing ball mode (option --mode ball) and changing the code section https://github.com/robotology/yarp/blob/82bcafc792ba9b32aad771f7f5d6f3c52ba5fbbe/src/devices/fakeFrameGrabber/FakeFrameGrabber.cpp#L434-L442 to draw three RGB circles instead of two,

    case VOCAB_BALL:
        {
            if (have_bg) {
                image.copy(background);
            } else {
                image.zero();
            }
            addCircle(image,PixelRgb{255,0,0},bx,by,22);
            addCircle(image,PixelRgb{0,255,0},bx,by,15);
            addCircle(image,PixelRgb{0,0,255},bx,by,8);

we can verify that the color order on the receiver side is BGR:

  1. Run the fakeFrameGrabber:
yarpdev --file fakeFrameGrabber_basic.ini --mode ball --name /icubSim/camLeftEye

fakeFrameGrabber_basic.ini:

device fakeFrameGrabber
width 640
height 480
period 10
syncro 1
  2. Connect the output port to a yarpview device.
  3. Run the Open-MCT visualizer.
[screenshot]

Originally posted by @nunoguedelha in ami-iit/yarp-openmct#101 (comment)

@nunoguedelha nunoguedelha self-assigned this Apr 4, 2022
@nunoguedelha (Collaborator)

We need to:

  • Figure out if the Color Space order used on the source (RGB) is conserved through the connection up to the destination.
  • Understand how the Color Space is configured on the Canvas API in order to render the proper RGBA values, i.e. respective colors and transparency level (alpha coefficient).

@nunoguedelha commented Apr 5, 2022

Understand how the Color Space is configured on the Canvas API

Part of the canvas API is addressed in #39 (comment). The CanvasRenderingContext2D interface, part of the Canvas API, provides the 2D rendering context for the drawing surface of a <canvas> element. It is used for drawing shapes, text, images, and other objects.

We can check the original color space of this 2D rendering context by drawing a rectangle ([fillRect](https://developer.mozilla.org/en-US/docs/Web/API/CanvasRenderingContext2D/fillRect)) filling the whole canvas, of a single color, red ([fillStyle](https://developer.mozilla.org/en-US/docs/Web/API/CanvasRenderingContext2D/fillStyle) = "#FF0000"), and then checking the image data (getImageData returns an ImageData object, a one-dimensional array containing the data in the RGBA order, with integer values between 0 and 255).

    function tmp_visual() {
      if(g_img != undefined)
      {
        virtual_img.src = yarp.getImageSrc(g_img.compression_type,g_img.buffer);
        // video_stream_element.getContext('2d').drawImage(virtual_img,0,0,video_stream_element.width,video_stream_element.height);
        video_stream_element.getContext('2d').fillStyle = "#FF0000";
        video_stream_element.getContext('2d').fillRect(0, 0, video_stream_element.width, video_stream_element.height);
        var myImageData = video_stream_element.getContext('2d').getImageData(0, 0, video_stream_element.width, video_stream_element.height);
      }

      setTimeout(tmp_visual,33);
    }

We get an ImageData with the following data:

myImageData = {
    colorSpace: "srgb",
    width: 300,
    height: 150,
    data: {255,0,0,255,  255,0,0,255,  255,0,0,255,  ...}
}

The data field reads as [red, 100% alpha] for every pixel. We use the same approach to check the green and blue color mapping.

@nunoguedelha commented Apr 5, 2022

Check where in the transmission between the source file and final rendering the Color Space is modified

We define three test image files for checking color space changes between the source and destination images. Each test image is a rectangle filling the canvas, either red, green or blue. The test images are encoded in the PPM* P6 format: manually written in P3 (ASCII image data) and then converted to P6 (binary image data) in GIMP (exported as "raw" format).

PPM format

http://paulbourke.net/dataformats/ppm/
https://web.cse.ohio-state.edu/~shen.94/681/Site/ppm_help.html

RGB color chart

https://www.rapidtables.com/web/color/RGB_Color.html

Example - Red rectangle
P3
# 8-bit ppm - RGB
10 10
255
255   0   0 255   0   0 255   0   0 255   0   0 255   0   0 255   0   0 255   0   0 255   0   0 255   0   0 255   0   0
255   0   0 255   0   0 255   0   0 255   0   0 255   0   0 255   0   0 255   0   0 255   0   0 255   0   0 255   0   0
255   0   0 255   0   0 255   0   0 255   0   0 255   0   0 255   0   0 255   0   0 255   0   0 255   0   0 255   0   0
255   0   0 255   0   0 255   0   0 255   0   0 255   0   0 255   0   0 255   0   0 255   0   0 255   0   0 255   0   0
255   0   0 255   0   0 255   0   0 255   0   0 255   0   0 255   0   0 255   0   0 255   0   0 255   0   0 255   0   0
255   0   0 255   0   0 255   0   0 255   0   0 255   0   0 255   0   0 255   0   0 255   0   0 255   0   0 255   0   0
255   0   0 255   0   0 255   0   0 255   0   0 255   0   0 255   0   0 255   0   0 255   0   0 255   0   0 255   0   0
255   0   0 255   0   0 255   0   0 255   0   0 255   0   0 255   0   0 255   0   0 255   0   0 255   0   0 255   0   0
255   0   0 255   0   0 255   0   0 255   0   0 255   0   0 255   0   0 255   0   0 255   0   0 255   0   0 255   0   0
255   0   0 255   0   0 255   0   0 255   0   0 255   0   0 255   0   0 255   0   0 255   0   0 255   0   0 255   0   0

(*) Note

From the YARP source code, we see that the fakeFrameGrabber only supports the PPM and PNG formats. By testing, we verified that it actually supports only PPM P1, P5 and P6 (binary/byte formats), and no PNG.
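As a side note, the manual P3 → P6 conversion done above in GIMP can also be sketched in a few lines of Python. This helper is purely illustrative (not part of the issue's tooling) and assumes a maxval ≤ 255 with '#' comments only on their own lines:

```python
def p3_to_p6(p3_text: str) -> bytes:
    """Convert an ASCII PPM (P3) image to binary PPM (P6).

    Illustrative sketch: assumes maxval <= 255 and simple '#' comments.
    """
    tokens = []
    for line in p3_text.splitlines():
        line = line.split('#', 1)[0]  # strip comments
        tokens.extend(line.split())
    assert tokens[0] == 'P3'
    width, height, maxval = (int(t) for t in tokens[1:4])
    # P6 stores the same samples as raw bytes instead of ASCII decimals
    samples = bytes(int(t) for t in tokens[4:4 + 3 * width * height])
    header = f'P6\n{width} {height}\n{maxval}\n'.encode('ascii')
    return header + samples

# A 2x1 all-red image, analogous to the 10x10 red rectangle above:
red = "P3\n# 8-bit ppm - RGB\n2 1\n255\n255 0 0 255 0 0\n"
print(p3_to_p6(red))  # -> b'P6\n2 1\n255\n\xff\x00\x00\xff\x00\x00'
```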

We get the following source-destination color mapping:

  • Red rectangle -> RGBA data = {255,0,0,255, 0,0,254,255, ...}
  • Green rectangle -> RGBA data = {0,255,0,255, 0,255,0,255, ...}
  • Blue rectangle -> RGBA data = {0,0,255,255, 254,0,0,255, ...}
  • Purple rectangle RGBA data = {200,0,100,255, ...} -> RGBA data = {100,0,200,255, ...}

The ±1/255 changes in the R, G, or B values are probably due to the JPEG compression/decompression. The blue and red channels are clearly swapped.
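The mapping above is exactly an R↔B channel swap on each pixel, which is what a BGR-interpreted buffer looks like when rendered as RGB. A minimal Python sketch (illustrative only) reproduces the table:

```python
def swap_r_b(rgba):
    """Swap the R and B channels of a flat RGBA pixel list
    (modelling a buffer whose channel order was misread)."""
    out = list(rgba)
    for i in range(0, len(out), 4):
        out[i], out[i + 2] = out[i + 2], out[i]  # alpha untouched
    return out

print(swap_r_b([255, 0, 0, 255]))    # red renders as blue
print(swap_r_b([0, 255, 0, 255]))    # green is unaffected
print(swap_r_b([200, 0, 100, 255]))  # purple {200,0,100} -> {100,0,200}
```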

If we load the purple rectangle file formatted as JPEG directly from the example index.html page, replacing the code

var virtual_img = new Image();
var write_time = document.getElementById('write-time');
function tmp_visual() {
if(g_img != undefined)
{
virtual_img.src = yarp.getImageSrc(g_img.compression_type,g_img.buffer);
video_stream_element.getContext('2d').drawImage(virtual_img,0,0);
}
setTimeout(tmp_visual,33);
}

by

    var virtual_img = new Image();
    virtual_img.src = "/testImage-xxxRectangle.jpg";

    var write_time = document.getElementById('write-time');

    function tmp_visual() {
      if(g_img != undefined)
      {
        video_stream_element.getContext('2d').drawImage(virtual_img,0,0,video_stream_element.width,video_stream_element.height);
      }

      setTimeout(tmp_visual,33);
    }

we obtain an unchanged image on the canvas, with the original color order. In this case the image source URI is virtual_img.src = "http://192.168.1.70:3000/testImage-xxxRectangle.jpg". drawImage writes the image on the canvas properly, taking into account the color space embedded in the JPEG image. We can check the color space in the GIMP app from the menu "Image->Metadata->View Metadata":
[screenshot]

Note on the metadata

We tried to check the Exif metadata directly with the NodeJS package exif-metadata, but it didn't seem to work properly.

Note on other ways of comparing the images

I also tried to compare the image data URI generated through an online third-party tool from the saved JPEG file "/testImage-xxxRectangle.jpg" against the data URI generated within the yarp.js example client (more precisely by yarp.getImageSrc()). There were too many differences, probably due to the additional metadata generated by GIMP when saving the original image in JPEG format and the different JPEG compression levels used by GIMP and YARP, all resulting in different base64-encoded strings. So I dropped the analysis based on the data URI.

https://ezgif.com/image-to-datauri
https://websemantics.uk/tools/image-to-data-uri-converter/
https://www.npmjs.com/package/exif-metadata

@nunoguedelha commented Apr 7, 2022

Check where in the transmission between the source file and final rendering the Color Space is modified [..Continued]

Let us recap the pipeline for rendering the rectangle image in the browser, either through the <fakeFrameGrabber> and <YarpJS Example Server>, or by loading the file directly in the browser (virtual_img.src = "/testImage-xxxRectangle.jpg";).

Through the YarpJS stream video example

Config file fakeFrameGrabber_testImage.ini:

device fakeFrameGrabber
name /icubSim/camLeftEye
mode none
period 10

Run command:

yarpdev --file fakeFrameGrabber_testImage.ini --src testImage-xxxRectangle-P6.ppm

Transmission pipeline:

[testImage-xxxRectangle-P6.ppm] ⟶ <fakeFrameGrabber> ⟶ /icubSim/camLeftEye/yarpjs/img:i ⟶ <yarp.js example server> ⟶ Yarp Socket ⟶ <HTML page>

  • The JPEG data is generated in the <fakeFrameGrabber> device.
  • In the <yarp.js example server> there are actually two steps: (1) reading the data through the Javascript bindings, (2) sending the data via the YarpJS Communicator.

Loading the file directly in the browser

[testImage-xxxRectangle.jpg] ⟶ <HTML page>

  • The JPEG file was generated from the PPM original file using GIMP.

Comparing the Image file data VS image data received on the Yarp socket

We compared, in the Node.js command line, the image appearance and metadata of the file "testImage-xxxRectangle.jpg" against what we get from the image data received by the client:

  • In the WebStorm IDE, break at
    virtual_img.src = yarp.getImageSrc(g_img.compression_type,g_img.buffer);
  • In the WebStorm console, print the g_img.buffer content and copy it to the clipboard:
    (new Uint8Array(g_img.buffer)).toString()
    [screenshot]
  • In the Node.js command line, run
    > var dataFromFaultyBuffer = new Uint8Array([<clipboard-content>])
    > fs.writeFileSync('<path-to-new-file-testImage-xxxRectangleCopyFromFaultyBuffer.jpg>',dataFromFaultyBuffer)
    

testImage-xxxRectangle.jpg

RGBA color {200,0,100}
Color space read through GIMP: sRGB

testImage-xxxRectangleCopyFromFaultyBuffer.jpg

RGBA color {100,0,200}
Color space read through GIMP: sRGB

Comparing the Image file data VS image data received on the Yarp port /yarpjs/img:i (through the Yarp bindings)

In the yarp.browserCommunicator,

yarp.js/yarp.js

Line 380 in d2e4609

yarp.browserCommunicator = function (_io) {

on a port.onRead() callback, the received data is sent through a socket.

yarp.js/yarp.js

Lines 388 to 391 in d2e4609

if (port != undefined)
{
port.onRead(function (obj) {
io.emit('yarp ' + port_name + ' message',obj.toSend());

The data to send is generated in obj.toSend(), called in the snippet above. We intercept the buffer content there and obtain the same result as before:

RGBA color {100,0,200}
Color space read through GIMP: sRGB

The color swap happens before the data is sent to the browser via the socket, and most probably in the Javascript bindings implementation, since we get the correct colors when reading the data with yarpview, or reading the image data with yarp read ... /icubSim/camLeftEye. As a side note, the fakeFrameGrabber uses an RGB pixel code (yarp::sig::ImageOf<yarp::sig::PixelRgb>):
https://github.com/robotology/yarp/blob/043326d4f763c3ce9da43bf7399a86c2bfb0c310/src/devices/fakeFrameGrabber/FakeFrameGrabber.cpp#L647

On top of that, when reading the image port with yarp read ... /icubSim/camLeftEye we get the following output:

[mat] [rgb] (3 320 8 10 10) {200 0 100 200 0 100 ...}

Nevertheless, I'm verifying this hypothesis by running YARP in debug mode ...

@nunoguedelha commented Apr 8, 2022

Check where in the transmission between the source file and final rendering the Color Space is modified [..On and On]

When reading the Yarp port /icubSim/camLeftEye data with the command

yarp read ... /icubSim/camLeftEye

We get the output

[mat] [rgb] (3 320 8 10 10) {200 0 100 200 0 100 ...}

This is the header [mat] [rgb] (3 320 8 10 10) and raw data (PPM color points) received on the port, read as a bottle. Note that the data sent from the fakeFrameGrabber is not compressed.

When reading the image data from port /yarpjs/img:i and debugging the YARP libraries execution in Xcode, we could not get a clear view of the received buffer data. It may have been an issue at the debugger level with the alignment between the printed image and the actual allocated memory. In any case, we could check in the FakeFrameGrabber.cpp code and the Image::read() method that the pixel code (color space RGB, BGR, etc.) of the received image data was as expected: header.id = 6449010, set through setPixelCode(header.id) in Image::read(), and translating to "r", "g", "b".
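The pixel code 6449010 is a YARP vocab: the integer packs up to four ASCII characters, least significant byte first. A small Python sketch (assuming this standard little-endian vocab packing) decodes it:

```python
def vocab_to_str(code: int) -> str:
    """Decode a YARP vocab integer into its ASCII characters
    (least significant byte first, stopping at the zero padding)."""
    chars = []
    while code:
        chars.append(chr(code & 0xFF))
        code >>= 8
    return ''.join(chars)

print(vocab_to_str(6449010))  # -> 'rgb', the pixel code seen in Image::read()
```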

Focusing now on the analysis of the processing done in YarpJS.node...

@nunoguedelha commented Apr 12, 2022

Check where in the transmission between the source file and final rendering the Color Space is modified [processing done in YarpJS.node and yarp.js main Node.js script]

Refer to #31 for further details on the setup used for debugging the YarpJS.node code in CLion.

Received Data in YarpJS_BufferedPort_Image::_callback_onRead

We run the YarpJS target and break in

YarpJS_BufferedPort_Image::_callback_onRead(std::vector<> &) YarpJS_BufferedPort_Image.cpp:30
YarpJS_Callback::_internal_worker_end(uv_work_s *, int) YarpJS_Callback.h:123

template <class T>
void YarpJS_Callback<T>::_internal_worker_end(uv_work_t *req, int status)
{
if (status == UV_ECANCELED)
return;
YarpJS_Callback<T> *tmp_this = static_cast<YarpJS_Callback<T> *>(req->data);
Nan::HandleScope scope;
std::vector<v8::Local<v8::Value> > tmp_arguments;
(tmp_this->parent->*(tmp_this->prepareCallback))(tmp_arguments);
tmp_this->callback->Call(tmp_arguments.size(),tmp_arguments.data());
tmp_this->mutex_callback.unlock();
}

=> call to tmp_this->prepareCallback

void YarpJS_BufferedPort_Image::_callback_onRead(std::vector<v8::Local<v8::Value> > &tmp_arguments)
{
// create a new YarpJS_Image
v8::Local<v8::Value> argv[1] = {Nan::New(Nan::Null)};
v8::Local<v8::Function> cons = Nan::GetFunction(Nan::New(YarpJS_Image::constructor)).ToLocalChecked();
v8::Local<v8::Object> tmpImgJS = cons->NewInstance(Nan::GetCurrentContext(), 1, argv).ToLocalChecked();
YarpJS_Image *tmpImg = Nan::ObjectWrap::Unwrap<YarpJS_Image>(tmpImgJS);
tmpImg->setYarpObj(new yarp::sig::Image(this->datum));
tmp_arguments.push_back(tmpImgJS);
}

argv = {v8::Local<v8::Value> [1]} 
 [0] = {v8::Local<v8::Value>} 
  val_ = {v8::Value *} 0x1080080b8 
cons = {v8::Local<v8::Function>} 
tmpImgJS = {v8::Local<v8::Object>} 
 val_ = {v8::Object *} 0x105039638 
tmpImg = {YarpJS_Image *} 0x104f1cf80 
 YarpJS_Wrapper<yarp::sig::Image> = {YarpJS_Wrapper<yarp::sig::Image>} 
  YarpJS = {YarpJS} 
  yarpObj = {yarp::sig::Image *} 0x107307e90 
   imgWidth = {size_t} 10
   imgHeight = {size_t} 10
   imgPixelSize = {size_t} 3
   imgRowSize = {size_t} 32
   imgQuantum = {size_t} 8
   imgPixelCode = {int} 6449010
   topIsLow = {bool} true
   data = {char **} 0x1073070e0 
    *data = {char *} 0x107308528 "\xc8"
     **data = {char} -56 '\xc8'

In YarpJS_BufferedPort_Image::_callback_onRead, the data pointed to by tmpImg.YarpJS_Wrapper<yarp::sig::Image>.yarpObj->data shows:
[screenshot]

The pixel data 0x c8 00 64 translates to decimal RGB colors 200 0 100 as expected. So the swap happens later on.
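As a cross-check, the raw bytes seen in the debugger can be decoded directly (plain Python, illustrative only):

```python
# First pixel bytes observed in the debugger, in RGB order.
# Note the debugger prints the first byte as the signed char -56,
# which is 0xc8 = 200 when read unsigned.
pixel = bytes.fromhex("c80064")
print(tuple(pixel))  # -> (200, 0, 100), the purple of the source PPM
```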

Passed data to yarp.js and compression

The call tree followed when the data is passed to the Javascript code is described below:

tmp_this->callback->Call(tmp_arguments.size(),tmp_arguments.data());

⟶ yarp.js/yarp.js, line 188 in d2e4609:
cb(_yarp_wrap_object(obj));

⟶ yarp.js/yarp.js, line 391 in d2e4609:
io.emit('yarp ' + port_name + ' message',obj.toSend());

⟶ return {

⟶ obj->compress(compression_quality);
Up to this point, the data is still uncompressed and no color swap occurred:

obj->isCompressed = false
obj->YarpJS_Wrapper<yarp::sig::Image>.yarpObj->data = [0xc8 0x00 0x64 ...]

Since we know that the pixels in the compressed image have their colors swapped, we can conclude that the swap occurs in the compression call:

obj->compress(compression_quality);

@nunoguedelha (Collaborator)

The analysis in #41 (comment) narrowed down the bug location to the JPEG compression function called in

obj->compress(compression_quality);

This is enough to proceed with the performance analysis and improvement in ami-iit/yarp-openmct#104 by trying a new transmission protocol (mjpeg) before actually implementing a fix here, since that issue has higher priority and the compression used with the MJPEG protocol shall replace the JPEG compression implemented in yarp.js.

@nunoguedelha (Collaborator)

CC @traversaro

@traversaro (Member)
Great investigation!

@nunoguedelha (Collaborator)

Actually, the compression function call points to

cv::imencode(encodeString,internalImage, internalBuffer, p);

which is implemented in an OpenCV image encoding/decoding library (imgcodecs.hpp). After a quick look through the file I noticed that the assumed color channel order is B-G-R, at least in the macOS implementation, which would explain the color swap. This is probably tunable through the input argument ``

/** @brief Reads an image from a buffer in memory.

The function imdecode reads an image from the specified buffer in the memory. If the buffer is too short or
contains invalid data, the function returns an empty matrix ( Mat::data==NULL ).

See cv::imread for the list of supported formats and flags description.

@note In the case of color images, the decoded images will have the channels stored in B G R order.
@param buf Input array or vector of bytes.
@param flags The same flags as in cv::imread, see cv::ImreadModes.
*/

/** @brief Encodes an image into a memory buffer.

The function imencode compresses the image and stores it in the memory buffer that is resized to fit the
result. See cv::imwrite for the list of supported formats and flags description.

@param ext File extension that defines the output format.
@param img Image to be written.
@param buf Output buffer resized to fit the compressed image.
@param params Format-specific parameters. See cv::imwrite and cv::ImwriteFlags.
*/
CV_EXPORTS_W bool imencode( const String& ext, InputArray img,
                            CV_OUT std::vector<uchar>& buf,
                            const std::vector<int>& params = std::vector<int>());

@nunoguedelha commented Apr 12, 2022

Convert the images to OpenCV JPEG format

/** @brief Encodes an image into a memory buffer.

The function imencode compresses the image and stores it in the memory buffer that is resized to fit the
result. See cv::imwrite for the list of supported formats and flags description.

@param ext File extension that defines the output format.
@param img Image to be written.
@param buf Output buffer resized to fit the compressed image.
@param params Format-specific parameters. See cv::imwrite and cv::ImwriteFlags.
*/
CV_EXPORTS_W bool imencode( const String& ext, InputArray img,
                            CV_OUT std::vector<uchar>& buf,
                            const std::vector<int>& params = std::vector<int>());
/** @brief Saves an image to a specified file.

The function imwrite saves the image to the specified file. The image format is chosen based on the
filename extension (see cv::imread for the list of extensions). In general, only 8-bit
single-channel or 3-channel (with 'BGR' channel order) images
can be saved using this function, with these exceptions:

- 16-bit unsigned (CV_16U) images can be saved in the case of PNG, JPEG 2000, and TIFF formats
- 32-bit float (CV_32F) images can be saved in PFM, TIFF, OpenEXR, and Radiance HDR formats;
  3-channel (CV_32FC3) TIFF images will be saved using the LogLuv high dynamic range encoding
  (4 bytes per pixel)
- PNG images with an alpha channel can be saved using this function. To do this, create
8-bit (or 16-bit) 4-channel image BGRA, where the alpha channel goes last. Fully transparent pixels
should have alpha set to 0, fully opaque pixels should have alpha set to 255/65535 (see the code sample below).
- Multiple images (vector of Mat) can be saved in TIFF format (see the code sample below).

If the image format is not supported, the image will be converted to 8-bit unsigned (CV_8U) and saved that way.

If the format, depth or channel order is different, use
Mat::convertTo and cv::cvtColor to convert it before saving. Or, use the universal FileStorage I/O
functions to save the image to XML or YAML format.

The sample below shows how to create a BGRA image, how to set custom compression parameters and save it to a PNG file.
It also demonstrates how to save multiple images in a TIFF file:
@include snippets/imgcodecs_imwrite.cpp
@param filename Name of the file.
@param img (Mat or vector of Mat) Image or Images to be saved.
@param params Format-specific parameters encoded as pairs (paramId_1, paramValue_1, paramId_2, paramValue_2, ... .) see cv::ImwriteFlags
*/
CV_EXPORTS_W bool imwrite( const String& filename, InputArray img,
              const std::vector<int>& params = std::vector<int>());
enum ImwriteFlags {
       IMWRITE_JPEG_QUALITY        = 1,  //!< For JPEG, it can be a quality from 0 to 100 (the higher is the better). Default value is 95.
       IMWRITE_JPEG_PROGRESSIVE    = 2,  //!< Enable JPEG features, 0 or 1, default is False.
       IMWRITE_JPEG_OPTIMIZE       = 3,  //!< Enable JPEG features, 0 or 1, default is False.
       IMWRITE_JPEG_RST_INTERVAL   = 4,  //!< JPEG restart interval, 0 - 65535, default is 0 - no restart.
       IMWRITE_JPEG_LUMA_QUALITY   = 5,  //!< Separate luma quality level, 0 - 100, default is 0 - don't use.
       IMWRITE_JPEG_CHROMA_QUALITY = 6,  //!< Separate chroma quality level, 0 - 100, default is 0 - don't use.
       IMWRITE_PNG_COMPRESSION     = 16, //!< For PNG, it can be the compression level from 0 to 9. A higher value means a smaller size and longer compression time. If specified, strategy is changed to IMWRITE_PNG_STRATEGY_DEFAULT (Z_DEFAULT_STRATEGY). Default value is 1 (best speed setting).
       IMWRITE_PNG_STRATEGY        = 17, //!< One of cv::ImwritePNGFlags, default is IMWRITE_PNG_STRATEGY_RLE.
       IMWRITE_PNG_BILEVEL         = 18, //!< Binary level PNG, 0 or 1, default is 0.
       IMWRITE_PXM_BINARY          = 32, //!< For PPM, PGM, or PBM, it can be a binary format flag, 0 or 1. Default value is 1.
       IMWRITE_EXR_TYPE            = (3 << 4) + 0, /* 48 */ //!< override EXR storage type (FLOAT (FP32) is default)
       IMWRITE_EXR_COMPRESSION     = (3 << 4) + 1, /* 49 */ //!< override EXR compression type (ZIP_COMPRESSION = 3 is default)
       IMWRITE_WEBP_QUALITY        = 64, //!< For WEBP, it can be a quality from 1 to 100 (the higher is the better). By default (without any parameter) and for quality above 100 the lossless compression is used.
       IMWRITE_PAM_TUPLETYPE       = 128,//!< For PAM, sets the TUPLETYPE field to the corresponding string value that is defined for the format
       IMWRITE_TIFF_RESUNIT = 256,//!< For TIFF, use to specify which DPI resolution unit to set; see libtiff documentation for valid values
       IMWRITE_TIFF_XDPI = 257,//!< For TIFF, use to specify the X direction DPI
       IMWRITE_TIFF_YDPI = 258, //!< For TIFF, use to specify the Y direction DPI
       IMWRITE_TIFF_COMPRESSION = 259, //!< For TIFF, use to specify the image compression scheme. See libtiff for integer constants corresponding to compression formats. Note, for images whose depth is CV_32F, only libtiff's SGILOG compression scheme is used. For other supported depths, the compression scheme can be specified by this flag; LZW compression is the default.
       IMWRITE_JPEG2000_COMPRESSION_X1000 = 272 //!< For JPEG2000, use to specify the target compression rate (multiplied by 1000). The value can be from 0 to 1000. Default is 1000.
     };

The current parameters p are

p.push_back(cv::IMWRITE_JPEG_QUALITY);
p.push_back(compression_quality);

No additional parameter here is meant to convert the channel order.

If the format, depth or channel order is different, use
Mat::convertTo and cv::cvtColor to convert it before saving. ...

cvtColor(...)

/** @brief Converts an image from one color space to another.

The function converts an input image from one color space to another. In case of a transformation
to-from RGB color space, the order of the channels should be specified explicitly (RGB or BGR). Note
that the default color format in OpenCV is often referred to as RGB but it is actually BGR (the
bytes are reversed). So the first byte in a standard (24-bit) color image will be an 8-bit Blue
component, the second byte will be Green, and the third byte will be Red. The fourth, fifth, and
sixth bytes would then be the second pixel (Blue, then Green, then Red), and so on.

The conventional ranges for R, G, and B channel values are:

  • 0 to 255 for CV_8U images
  • 0 to 65535 for CV_16U images
  • 0 to 1 for CV_32F images

In case of linear transformations, the range does not matter. But in case of a non-linear
transformation, an input RGB image should be normalized to the proper value range to get the correct
results, for example, for RGB $\rightarrow$ L*u*v* transformation. For example, if you have a
32-bit floating-point image directly converted from an 8-bit image without any scaling, then it will
have the 0..255 value range instead of 0..1 assumed by the function. So, before calling #cvtColor ,
you need first to scale the image down:
@code
img *= 1./255;
cvtColor(img, img, COLOR_BGR2Luv);
@endcode
If you use #cvtColor with 8-bit images, the conversion will have some information lost. For many
applications, this will not be noticeable but it is recommended to use 32-bit images in applications
that need the full range of colors or that convert an image before an operation and then convert
back.

If conversion adds the alpha channel, its value will set to the maximum of corresponding channel
range: 255 for CV_8U, 65535 for CV_16U, 1 for CV_32F.

@param src input image: 8-bit unsigned, 16-bit unsigned ( CV_16UC... ), or single-precision
floating-point.
@param dst output image of the same size and depth as src.
@param code color space conversion code (see #ColorConversionCodes).
@param dstCn number of channels in the destination image; if the parameter is 0, the number of the
channels is derived automatically from src and code.

@see @ref imgproc_color_conversions
*/
CV_EXPORTS_W void cvtColor( InputArray src, OutputArray dst, int code, int dstCn = 0 );

I tried a quick-and-dirty fix, replacing

cv::imencode(encodeString,internalImage, internalBuffer, p);

with

    cv::Mat      internalImageBGR;
    cv::cvtColor(internalImage,internalImageBGR,cv::COLOR_RGB2BGR,0);
    cv::imencode(encodeString,internalImageBGR, internalBuffer, p);

and it works fine:
[screenshot]

@nunoguedelha commented Apr 13, 2022

As per @S-Dafarra's suggestion, check if we can use directly the automatic conversion from Yarp.

@nunoguedelha (Collaborator)

As per @S-Dafarra's suggestion, check if we can use directly the automatic conversion from Yarp.

Shall be handled in #43 .

@nunoguedelha (Collaborator)

The switched RGB colors were fixed by #40.
