Film negative stable multipliers (#5485)

* Added a new feature to calculate channel multipliers from a crop area. It also saves the crop-area channel medians to the processing profile, in order to get a more consistent color balance when batch-applying the same profile to multiple pictures from the same film roll.
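
A minimal sketch of that first approach, using hypothetical free-standing names (`multipliersFromMedians`, `TARGET`) rather than the PR's actual identifiers: the channel medians sampled inside the crop are inverted with the per-channel exponents, then each is mapped to a common target output level (the level shown here matches the median-based code removed further down in the diff).

```cpp
#include <algorithm>
#include <array>
#include <cmath>

// Hypothetical helper, not the PR's code: derive per-channel multipliers from
// crop-area medians so that each inverted channel median lands on the same
// target output level.
std::array<float, 3> multipliersFromMedians(const std::array<float, 3>& cropMedians,
                                            const std::array<float, 3>& exps)
{
    constexpr float TARGET = 65000.f / 24.f; // reference output level for the medians
    std::array<float, 3> mults;
    for (int c = 0; c < 3; ++c) {
        // Film negative inversion is a per-channel power function, so the median of
        // the inverted channel is median^exps[c]; scale that to the common target.
        const float invertedMedian = std::pow(std::max(cropMedians[c], 1.f), exps[c]);
        mults[c] = TARGET / invertedMedian;
    }
    return mults;
}
```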

* Fixed wrong initialization of an array and added a missing check on the result of `getFilmNegativeMedians()`.
Moved `ImProcCoordinator::translateCoord()` from private member to anonymous namespace.
Fixed some whitespace and formatting issues.

* Fixed some formatting issues

* Passed `ipf` parameter as const in `translateCoord`.
Narrowed `using namespace` to single class `Coord2D`.

* Added a `scaleGain` entry to the thumbnail metadata file, to make the `scale_mul` multipliers available during the thumbnail processing phase. This simplifies the multiplier calculations, so "faking" thumbnail multipliers in the main image processing is no longer necessary. As a result, output values are immune to slight variations of the multipliers between successive shots taken with in-camera AWB turned on.
Shifted the multipliers so that the output channel medians are balanced when "Camera WB" is selected. This way, just computing the multipliers from the crop and keeping "Camera WB" (the default) already gives a reasonably well-balanced image as a starting point.
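
A rough illustration of the balancing idea, with hypothetical names (`balanceToGreenMedian`, `outputMedians`) and assuming `outputMedians` are the channel medians of the converted image measured with the camera WB multipliers applied:

```cpp
#include <array>

// Hypothetical sketch, not the PR's code: keep green as the reference and align
// the red/blue output medians to it by shifting their multipliers, so the image
// looks neutral under the default "Camera WB" setting.
void balanceToGreenMedian(std::array<float, 3>& mults,
                          const std::array<float, 3>& outputMedians)
{
    mults[0] *= outputMedians[1] / outputMedians[0]; // red median -> green median
    mults[2] *= outputMedians[1] / outputMedians[2]; // blue median -> green median
}
```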

* New channel scaling method, based on a film base color sample instead of crop-area channel medians. Channels are scaled so that the converted film base values are equal to 1/512th of the output range (65k). This gives better black levels in the output and more consistency when batch-processing multiple negatives.
The output is now compensated for a known fixed WB value, so that the film base will appear grey when WB is set to 3500K, Green=1.
Added PPVERSION 347 to preserve backwards compatibility: when a processing profile saved by RT 5.7 is loaded (PPVERSION=346), the new fields are initialized to the special value -1, which will instruct the main processing to follow the old channel scaling behaviour. The old channel scaling multipliers will then be converted to the new film base values so that the resulting image is the same, and the fields will be overwritten as soon as the PP is saved again. This will transparently upgrade the processing profile.
When the new behaviour is used but the film base values are still unset, they are estimated from the channel medians, excluding a 20% border around the image. This should give a better out-of-the-box result for pictures where the film holder takes up a large part of the frame.
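
Condensed from the new code in the diff below, with simplified free-standing names (the gain compensation added later in this PR is omitted here): the raw film base values are scaled and inverted like any other pixel, and the multipliers are chosen so the converted base lands at 1/512th of the output range, then divided by the fixed 3500K reference multipliers.

```cpp
#include <algorithm>
#include <array>
#include <cmath>

// Simplified rendering of the new scaling (names condensed from the diff below).
std::array<float, 3> multsFromFilmBase(const std::array<float, 3>& filmBaseRaw, // unscaled base values
                                       const std::array<float, 3>& scale_mul,
                                       const std::array<float, 3>& exps,
                                       const std::array<double, 3>& wb_mul)     // fixed 3500K multipliers
{
    constexpr float MAX_OUT_VALUE = 65000.f;
    std::array<float, 3> mults;
    for (int c = 0; c < 3; ++c) {
        // Scale the raw base value as the pipeline would, then apply the channel exponent.
        const float fb = std::pow(std::max(filmBaseRaw[c] * scale_mul[c], 1.f), exps[c]);
        mults[c] = (MAX_OUT_VALUE / 512.f) / fb;   // converted base ends up at 1/512th of max
        mults[c] /= static_cast<float>(wb_mul[c]); // reverse the fixed WB reference later in the WB tool
    }
    return mults;
}
```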

* Code cleanup from review

* Run astyle on film neg source files

* Fixed automated build failure caused by incompatible libraries on my dev PC.

* Simplified the `Thumbnail::processFilmNegative` method signature. There is no need to pass the `rmi`, `gmi`, `bmi` multipliers in from the caller; the method can do the same with its own internal multipliers.

* Added a `FilmNegListener` class to pass the estimated film base values to the GUI after the first processing. Removed the old `filmBaseValues` instance variable from `RawImageSource`.
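
An illustrative shape of such a listener interface (the method name and parameter type here are approximations, not necessarily the exact declaration in the engine headers):

```cpp
#include <array>

// Approximate sketch of the listener: the raw processing code calls back into the
// GUI once the film base values have been estimated during the first processing pass.
class FilmNegListener
{
public:
    virtual ~FilmNegListener() = default;
    virtual void filmBaseValuesChanged(const std::array<float, 3>& rgb) = 0;
};
```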

* Code cleanup

* Forgot to set baseValues flag in `PartialPasteDlg::applyPaste`
Fixed `filmBaseValuesLabel` not updating when reading zero baseValues. Normally this is not needed (the label is updated later by the listener), but when the user is browsing through pictures the listener won't fire, so the label must be updated to show that the values are unset.

* Overwrote the channel scaling multipliers by calling `get_colorsCoeff` with `forceAutoWB=false`.
Initially, in `RawImageSource::load`, channels are auto-balanced by averaging the whole picture when computing multipliers.
This can give different multipliers for multiple shots from the same camera, which leads to inconsistent conversions when batch-processing multiple negatives.
This commit re-sets `scale_mul`, `ref_pre_mul`, etc., in order to "undo" the auto-WB and use the normal camera multipliers.

* Found an easier way to get stable overall multipliers and removed the (horrible) on-the-fly mutation of the scaling instance variables.
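
The "easier way" is visible in the hunk further down as `autoGainComp`: instead of mutating the scaling members of `RawImageSource`, the difference between the auto-WB gain and the plain camera gain is folded into the film-negative multipliers. A simplified, self-contained rendering, with member names borrowed from the diff:

```cpp
#include <array>

// Simplified rendering of the final approach (names borrowed from the diff below):
// the scaling members stay untouched; only the conversion multipliers are compensated.
std::array<float, 3> stabilizeMults(std::array<float, 3> mults,
                                    float camInitialGain, // gain for plain camera WB
                                    float initialGain)    // gain from auto-WB averaging
{
    const float autoGainComp = camInitialGain / initialGain;
    for (int c = 0; c < 3; ++c) {
        mults[c] *= autoGainComp; // multipliers no longer depend on image content
    }
    return mults;
}
```
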
This commit is contained in:
rom9
2020-04-13 17:20:56 +02:00
committed by GitHub
parent 996ba0a9a3
commit 22eee9787e
21 changed files with 705 additions and 191 deletions

@@ -35,6 +35,9 @@
namespace
{
using rtengine::ST_BAYER;
using rtengine::ST_FUJI_XTRANS;
using rtengine::settings;
bool channelsAvg(
const rtengine::RawImage* ri,
@@ -43,17 +46,16 @@ bool channelsAvg(
const float* cblacksom,
rtengine::Coord spotPos,
int spotSize,
const rtengine::procparams::FilmNegativeParams& params,
std::array<float, 3>& avgs
)
{
avgs = {}; // Channel averages
if (ri->getSensorType() != rtengine::ST_BAYER && ri->getSensorType() != rtengine::ST_FUJI_XTRANS) {
if (ri->getSensorType() != ST_BAYER && ri->getSensorType() != ST_FUJI_XTRANS) {
return false;
}
if (rtengine::settings->verbose) {
if (settings->verbose) {
printf("Spot coord: x=%d y=%d\n", spotPos.x, spotPos.y);
}
@@ -69,9 +71,10 @@ bool channelsAvg(
}
std::array<int, 3> pxCount = {}; // Per-channel sample counts
for (int c = x1; c < x2; ++c) {
for (int r = y1; r < y2; ++r) {
const int ch = ri->getSensorType() == rtengine::ST_BAYER ? ri->FC(r,c) : ri->XTRANSFC(r,c);
const int ch = ri->getSensorType() == ST_BAYER ? ri->FC(r, c) : ri->XTRANSFC(r, c);
++pxCount[ch];
@@ -88,6 +91,112 @@ bool channelsAvg(
return true;
}
void calcMedians(
const rtengine::RawImage* ri,
float** data,
int x1, int y1, int x2, int y2,
std::array<float, 3>& meds
)
{
MyTime t1, t2, t3;
t1.set();
// Channel vectors to calculate medians
std::array<std::vector<float>, 3> cvs;
// Sample one every 5 pixels, and push the value in the appropriate channel vector.
// Choose an odd step, not a multiple of the CFA size, to get a chance to visit each channel.
if (ri->getSensorType() == ST_BAYER) {
for (int row = y1; row < y2; row += 5) {
const int c0 = ri->FC(row, x1 + 0);
const int c1 = ri->FC(row, x1 + 5);
int col = x1;
for (; col < x2 - 5; col += 10) {
cvs[c0].push_back(data[row][col]);
cvs[c1].push_back(data[row][col + 5]);
}
if (col < x2) {
cvs[c0].push_back(data[row][col]);
}
}
} else if (ri->getSensorType() == ST_FUJI_XTRANS) {
for (int row = y1; row < y2; row += 5) {
const std::array<unsigned int, 6> cs = {
ri->XTRANSFC(row, x1 + 0),
ri->XTRANSFC(row, x1 + 5),
ri->XTRANSFC(row, x1 + 10),
ri->XTRANSFC(row, x1 + 15),
ri->XTRANSFC(row, x1 + 20),
ri->XTRANSFC(row, x1 + 25)
};
int col = x1;
for (; col < x2 - 25; col += 30) {
for (int c = 0; c < 6; ++c) {
cvs[cs[c]].push_back(data[row][col + c * 5]);
}
}
for (int c = 0; col < x2; col += 5, ++c) {
cvs[cs[c]].push_back(data[row][col]);
}
}
}
t2.set();
if (settings->verbose) {
printf("Median vector fill loop time us: %d\n", t2.etime(t1));
}
t2.set();
for (int c = 0; c < 3; ++c) {
// Find median values for each channel
if (!cvs[c].empty()) {
rtengine::findMinMaxPercentile(cvs[c].data(), cvs[c].size(), 0.5f, meds[c], 0.5f, meds[c], true);
}
}
t3.set();
if (settings->verbose) {
printf("Sample count: R=%zu, G=%zu, B=%zu\n", cvs[0].size(), cvs[1].size(), cvs[2].size());
printf("Median calc time us: %d\n", t3.etime(t2));
}
}
std::array<double, 3> calcWBMults(
const rtengine::ColorTemp& wb,
const rtengine::ImageMatrices& imatrices,
const rtengine::RawImage *ri,
const float ref_pre_mul[4])
{
std::array<double, 3> wb_mul;
double r, g, b;
wb.getMultipliers(r, g, b);
wb_mul[0] = imatrices.cam_rgb[0][0] * r + imatrices.cam_rgb[0][1] * g + imatrices.cam_rgb[0][2] * b;
wb_mul[1] = imatrices.cam_rgb[1][0] * r + imatrices.cam_rgb[1][1] * g + imatrices.cam_rgb[1][2] * b;
wb_mul[2] = imatrices.cam_rgb[2][0] * r + imatrices.cam_rgb[2][1] * g + imatrices.cam_rgb[2][2] * b;
for (int c = 0; c < 3; ++c) {
wb_mul[c] = ri->get_pre_mul(c) / wb_mul[c] / ref_pre_mul[c];
}
// Normalize max channel gain to 1.0
float mg = rtengine::max(wb_mul[0], wb_mul[1], wb_mul[2]);
for (int c = 0; c < 3; ++c) {
wb_mul[c] /= mg;
}
return wb_mul;
}
}
bool rtengine::RawImageSource::getFilmNegativeExponents(Coord2D spotA, Coord2D spotB, int tran, const procparams::FilmNegativeParams &currentParams, std::array<float, 3>& newExps)
@@ -104,21 +213,29 @@ bool rtengine::RawImageSource::getFilmNegativeExponents(Coord2D spotA, Coord2D s
std::array<float, 3> clearVals;
std::array<float, 3> denseVals;
// Get channel averages in the two spots, sampling from the original ri->data buffer.
// NOTE: rawData values might be affected by CA correction, FlatField, etc., so:
// rawData[y][x] == (ri->data[y][x] - cblacksom[c]) * scale_mul[c]
// is not always true. To calculate exponents on the exact values, we should keep
// a copy of the rawData buffer after preprocessing. Worth the memory waste?
// Sample first spot
transformPosition(spotA.x, spotA.y, tran, spot.x, spot.y);
if (!channelsAvg(ri, W, H, cblacksom, spot, spotSize, currentParams, clearVals)) {
if (!channelsAvg(ri, W, H, cblacksom, spot, spotSize, clearVals)) {
return false;
}
// Sample second spot
transformPosition(spotB.x, spotB.y, tran, spot.x, spot.y);
if (!channelsAvg(ri, W, H, cblacksom, spot, spotSize, currentParams, denseVals)) {
if (!channelsAvg(ri, W, H, cblacksom, spot, spotSize, denseVals)) {
return false;
}
// Detect which one is the dense spot, based on green channel
if (clearVals[1] < denseVals[1]) {
std::swap(clearVals, denseVals);
}
if (settings->verbose) {
@@ -152,7 +269,32 @@ bool rtengine::RawImageSource::getFilmNegativeExponents(Coord2D spotA, Coord2D s
return true;
}
void rtengine::RawImageSource::filmNegativeProcess(const procparams::FilmNegativeParams &params)
bool rtengine::RawImageSource::getRawSpotValues(Coord2D spotCoord, int spotSize, int tran, const procparams::FilmNegativeParams &params, std::array<float, 3>& rawValues)
{
Coord spot;
transformPosition(spotCoord.x, spotCoord.y, tran, spot.x, spot.y);
if (settings->verbose) {
printf("Transformed coords: %d,%d\n", spot.x, spot.y);
}
if (spotSize < 4) {
return false;
}
// Calculate averages of raw unscaled channels
if (!channelsAvg(ri, W, H, cblacksom, spot, spotSize, rawValues)) {
return false;
}
if (settings->verbose) {
printf("Raw spot values: R=%g, G=%g, B=%g\n", rawValues[0], rawValues[1], rawValues[2]);
}
return true;
}
void rtengine::RawImageSource::filmNegativeProcess(const procparams::FilmNegativeParams &params, std::array<float, 3>& filmBaseValues)
{
// BENCHFUNMICRO
@@ -168,92 +310,114 @@ void rtengine::RawImageSource::filmNegativeProcess(const procparams::FilmNegativ
static_cast<float>(-params.blueRatio * params.greenExp)
};
MyTime t1, t2, t3,t4, t5;
t1.set();
// Channel vectors to calculate medians
std::array<std::vector<float>, 3> cvs;
// Sample one every 5 pixels, and push the value in the appropriate channel vector.
// Choose an odd step, not a multiple of the CFA size, to get a chance to visit each channel.
if (ri->getSensorType() == ST_BAYER) {
for (int row = 0; row < H; row += 5) {
const int c0 = ri->FC(row, 0);
const int c1 = ri->FC(row, 5);
int col = 0;
for (; col < W - 5; col += 10) {
cvs[c0].push_back(rawData[row][col]);
cvs[c1].push_back(rawData[row][col + 5]);
}
if (col < W) {
cvs[c0].push_back(rawData[row][col]);
}
}
}
else if (ri->getSensorType() == ST_FUJI_XTRANS) {
for (int row = 0; row < H; row += 5) {
const std::array<unsigned int, 6> cs = {
ri->XTRANSFC(row, 0),
ri->XTRANSFC(row, 5),
ri->XTRANSFC(row, 10),
ri->XTRANSFC(row, 15),
ri->XTRANSFC(row, 20),
ri->XTRANSFC(row, 25)
};
int col = 0;
for (; col < W - 25; col += 30) {
for (int c = 0; c < 6; ++c) {
cvs[cs[c]].push_back(rawData[row][col + c * 5]);
}
}
for (int c = 0; col < W; col += 5, ++c) {
cvs[cs[c]].push_back(rawData[row][col]);
}
}
}
constexpr float MAX_OUT_VALUE = 65000.f;
t2.set();
// Get multipliers for a known, fixed WB setting, that will be the starting point
// for balancing the converted image.
const std::array<double, 3> wb_mul = calcWBMults(
ColorTemp(3500., 1., 1., "Custom"), imatrices, ri, ref_pre_mul);
if (settings->verbose) {
printf("Median vector fill loop time us: %d\n", t2.etime(t1));
if (rtengine::settings->verbose) {
printf("Fixed WB mults: %g %g %g\n", wb_mul[0], wb_mul[1], wb_mul[2]);
}
t2.set();
// Compensation factor to restore the non-autoWB initialGain (see RawImageSource::load)
const float autoGainComp = camInitialGain / initialGain;
std::array<float, 3> medians; // Channel median values
std::array<float, 3> mults = {
1.f,
1.f,
1.f
}; // Channel normalization multipliers
std::array<float, 3> mults; // Channel normalization multipliers
// If film base values are set in params, use those
if (filmBaseValues[0] <= 0.f) {
// ...otherwise, the film negative tool might have just been enabled on this image,
// without any previous setting. So, estimate film base values from channel medians
std::array<float, 3> medians;
// Special value for backwards compatibility with profiles saved by RT 5.7
const bool oldChannelScaling = filmBaseValues[0] == -1.f;
// If using the old channel scaling method, get medians from the whole current image,
// reading values from the already-scaled rawData buffer.
if (oldChannelScaling) {
calcMedians(ri, rawData, 0, 0, W, H, medians);
} else {
// Cut 20% border from medians calculation. It will probably contain outlier values
// from the film holder, which will bias the median result.
const int bW = W * 20 / 100;
const int bH = H * 20 / 100;
calcMedians(ri, rawData, bW, bH, W - bW, H - bH, medians);
}
// Un-scale rawData medians
for (int c = 0; c < 3; ++c) {
medians[c] /= scale_mul[c];
}
if (settings->verbose) {
printf("Channel medians: R=%g, G=%g, B=%g\n", medians[0], medians[1], medians[2]);
}
for (int c = 0; c < 3; ++c) {
// Estimate film base values, so that in the output data, each channel
// median will correspond to 1/24th of MAX.
filmBaseValues[c] = pow_F(24.f / 512.f, 1.f / exps[c]) * medians[c];
if (oldChannelScaling) {
// If using the old channel scaling method, apply WB multipliers here to undo their
// effect later, since fixed wb compensation was not used in previous version.
// Also, undo the effect of the autoGainComp factor (see below).
filmBaseValues[c] /= pow_F((wb_mul[c] / autoGainComp), 1.f / exps[c]);// / autoGainComp;
}
for (int c = 0; c < 3; ++c) {
// Find median values for each channel
if (!cvs[c].empty()) {
findMinMaxPercentile(cvs[c].data(), cvs[c].size(), 0.5f, medians[c], 0.5f, medians[c], true);
medians[c] = pow_F(rtengine::max(medians[c], 1.f), exps[c]);
// Determine the channel multiplier so that N times the median becomes 65k. This clips away
// the values in the dark border surrounding the negative (due to the film holder, for example),
// the reciprocal of which have blown up to stellar values.
mults[c] = MAX_OUT_VALUE / (medians[c] * 24.f);
}
}
t3.set();
// Calculate multipliers based on previously obtained film base input values.
// Apply current scaling coefficients to raw, unscaled base values.
std::array<float, 3> fb = {
filmBaseValues[0] * scale_mul[0],
filmBaseValues[1] * scale_mul[1],
filmBaseValues[2] * scale_mul[2]
};
if (settings->verbose) {
printf("Sample count: %zu, %zu, %zu\n", cvs[0].size(), cvs[1].size(), cvs[2].size());
printf("Medians: %g %g %g\n", static_cast<double>(medians[0]), static_cast<double>(medians[1]), static_cast<double>(medians[2]));
printf("Computed multipliers: %g %g %g\n", static_cast<double>(mults[0]), static_cast<double>(mults[1]), static_cast<double>(mults[2]));
printf("Median calc time us: %d\n", t3.etime(t2));
printf("Input film base values: %g %g %g\n", fb[0], fb[1], fb[2]);
}
for (int c = 0; c < 3; ++c) {
// Apply channel exponents, to obtain the corresponding base values in the output data
fb[c] = pow_F(rtengine::max(fb[c], 1.f), exps[c]);
// Determine the channel multiplier so that the film base value is 1/512th of max.
mults[c] = (MAX_OUT_VALUE / 512.f) / fb[c];
// Un-apply the fixed WB multipliers, to reverse their effect later in the WB tool.
// This way, the output image will be adapted to this fixed WB setting
mults[c] /= wb_mul[c];
// Also compensate for the initialGain difference between the default scaling (forceAutoWB=true)
// and the non-autoWB scaling (see camInitialGain).
// This effectively restores camera scaling multipliers, and gives us stable multipliers
// (not depending on image content).
mults[c] *= autoGainComp;
}
if (settings->verbose) {
printf("Output film base values: %g %g %g\n", static_cast<double>(fb[0]), static_cast<double>(fb[1]), static_cast<double>(fb[2]));
printf("Computed multipliers: %g %g %g\n", static_cast<double>(mults[0]), static_cast<double>(mults[1]), static_cast<double>(mults[2]));
}
constexpr float CLIP_VAL = 65535.f;
t3.set();
MyTime t1, t2, t3;
t1.set();
if (ri->getSensorType() == ST_BAYER) {
#ifdef __SSE2__
@@ -264,6 +428,7 @@ void rtengine::RawImageSource::filmNegativeProcess(const procparams::FilmNegativ
#ifdef _OPENMP
#pragma omp parallel for schedule(dynamic, 16)
#endif
for (int row = 0; row < H; ++row) {
int col = 0;
// Avoid trouble with zeroes, minimum pixel value is 1.
@@ -274,14 +439,18 @@ void rtengine::RawImageSource::filmNegativeProcess(const procparams::FilmNegativ
#ifdef __SSE2__
const vfloat expsv = _mm_setr_ps(exps0, exps1, exps0, exps1);
const vfloat multsv = _mm_setr_ps(mult0, mult1, mult0, mult1);
for (; col < W - 3; col += 4) {
STVFU(rawData[row][col], vminf(multsv * pow_F(vmaxf(LVFU(rawData[row][col]), onev), expsv), clipv));
}
#endif // __SSE2__
for (; col < W - 1; col += 2) {
rawData[row][col] = rtengine::min(mult0 * pow_F(rtengine::max(rawData[row][col], 1.f), exps0), CLIP_VAL);
rawData[row][col + 1] = rtengine::min(mult1 * pow_F(rtengine::max(rawData[row][col + 1], 1.f), exps1), CLIP_VAL);
}
if (col < W) {
rawData[row][col] = rtengine::min(mult0 * pow_F(rtengine::max(rawData[row][col], 1.f), exps0), CLIP_VAL);
}
@@ -295,6 +464,7 @@ void rtengine::RawImageSource::filmNegativeProcess(const procparams::FilmNegativ
#ifdef _OPENMP
#pragma omp parallel for schedule(dynamic, 16)
#endif
for (int row = 0; row < H; row ++) {
int col = 0;
// Avoid trouble with zeroes, minimum pixel value is 1.
@@ -321,30 +491,34 @@ void rtengine::RawImageSource::filmNegativeProcess(const procparams::FilmNegativ
const vfloat multsv0 = _mm_setr_ps(multsc[0], multsc[1], multsc[2], multsc[3]);
const vfloat multsv1 = _mm_setr_ps(multsc[4], multsc[5], multsc[0], multsc[1]);
const vfloat multsv2 = _mm_setr_ps(multsc[2], multsc[3], multsc[4], multsc[5]);
for (; col < W - 11; col += 12) {
STVFU(rawData[row][col], vminf(multsv0 * pow_F(vmaxf(LVFU(rawData[row][col]), onev), expsv0), clipv));
STVFU(rawData[row][col + 4], vminf(multsv1 * pow_F(vmaxf(LVFU(rawData[row][col + 4]), onev), expsv1), clipv));
STVFU(rawData[row][col + 8], vminf(multsv2 * pow_F(vmaxf(LVFU(rawData[row][col + 8]), onev), expsv2), clipv));
}
#endif // __SSE2__
for (; col < W - 5; col += 6) {
for (int c = 0; c < 6; ++c) {
rawData[row][col + c] = rtengine::min(multsc[c] * pow_F(rtengine::max(rawData[row][col + c], 1.f), expsc[c]), CLIP_VAL);
}
}
for (int c = 0; col < W; col++, c++) {
rawData[row][col + c] = rtengine::min(multsc[c] * pow_F(rtengine::max(rawData[row][col + c], 1.f), expsc[c]), CLIP_VAL);
}
}
}
t4.set();
t2.set();
if (settings->verbose) {
printf("Pow loop time us: %d\n", t4.etime(t3));
printf("Pow loop time us: %d\n", t2.etime(t1));
}
t4.set();
t2.set();
PixelsMap bitmapBads(W, H);
@@ -354,6 +528,7 @@ void rtengine::RawImageSource::filmNegativeProcess(const procparams::FilmNegativ
#ifdef _OPENMP
#pragma omp parallel for reduction(+:totBP) schedule(dynamic,16)
#endif
for (int i = 0; i < H; ++i) {
for (int j = 0; j < W; ++j) {
if (rawData[i][j] >= MAX_OUT_VALUE) {
@@ -367,11 +542,11 @@ void rtengine::RawImageSource::filmNegativeProcess(const procparams::FilmNegativ
interpolateBadPixelsBayer(bitmapBads, rawData);
}
}
else if (ri->getSensorType() == ST_FUJI_XTRANS) {
} else if (ri->getSensorType() == ST_FUJI_XTRANS) {
#ifdef _OPENMP
#pragma omp parallel for reduction(+:totBP) schedule(dynamic,16)
#endif
for (int i = 0; i < H; ++i) {
for (int j = 0; j < W; ++j) {
if (rawData[i][j] >= MAX_OUT_VALUE) {
@@ -386,10 +561,10 @@ void rtengine::RawImageSource::filmNegativeProcess(const procparams::FilmNegativ
}
}
t5.set();
t3.set();
if (settings->verbose) {
printf("Bad pixels count: %d\n", totBP);
printf("Bad pixels interpolation time us: %d\n", t5.etime(t4));
printf("Bad pixels interpolation time us: %d\n", t3.etime(t2));
}
}