OSA Fall Vision Meeting Spectrum Recovery Competition, 2011

The Optical Society of America Fall Vision Meeting is holding a Spectrum Recovery Competition. You can find all of the details on the official website; below, I provide a Python port of their MATLAB code.

Spectrum Recovery Code (Python)

SimpleIllumEstimation.py (download) (view source) – a Python port of the SimpleIllumEstimation.m program provided by the organizers of the OSA Spectrum Recovery Competition, 2011.


  • The code requires numpy and scipy (for reading .mat files), and optionally depends on matplotlib (if you want to look at the images).

    • If someone asks, I’ll be happy to provide .npz versions of the .mat files, removing the scipy dependency.
  • The code will download and extract the ContestImageData.zip file from the official website if it cannot find the data directory or the zip file, so all you need to do is grab the Python file and run it.

  • If you are running inside IPython with an interactive matplotlib backend, the plt.show commands will not block; feel free to uncomment the raw_input lines to insert pauses in that case.

  • If the official estimatedIllumSpds.txt is not found, the script will fetch it from the official website for you as well.

  • Note that this code saves the estimate computed on your machine to the file myEstimatedIllumSpds.txt and compares it to the official estimatedIllumSpds.txt. The values stored in the two files should be identical. Indeed, I’ve ensured that the formatting is the same, so the only difference between the two is the trailing whitespace on each line of the original, which you will not find in the file the Python program generates.
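If you want to verify that for yourself, a small helper like the following (my own illustrative code, not part of the contest script) compares two files line by line while ignoring trailing whitespace:

```python
def files_match(lines_a, lines_b):
    """Return True when two sequences of lines agree once the
    trailing whitespace on each line is ignored."""
    return [l.rstrip() for l in lines_a] == [l.rstrip() for l in lines_b]
```

For example, `files_match(open('myEstimatedIllumSpds.txt'), open('estimatedIllumSpds.txt'))` should come back True.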

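Should you go the .npz route, reading such a file back mirrors how the script treats the dictionary that loadmat returns; the helper below (hypothetical, not part of the contest code) drops the metadata-style keys:

```python
import numpy as np

def load_npz_arrays(npzfile):
    """Return a name -> array dict from an .npz file, skipping any
    keys that start with '_' (the convention loadmat uses for metadata)."""
    with np.load(npzfile) as z:
        return dict((k, z[k]) for k in z.files if not k.startswith('_'))
```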
Please contact me with your feedback.

See also

Python for Scientific Computing – a great list of resources put together by Fernando Perez.

Source Code

View the source code below, or download it.

# SimpleIllumEstimation
# Very simple gray world illumination estimation program. Basically follows
# method introduced by Buchsbaum, 1980, J. Franklin Institute.
# This is not meant to be a particularly great algorithm, but the program
# estimates how to read the data and the format we'd like to get the 
# estimates in.  It also provides a baseline score that someone should
# be able to beat.
# 1/17/11  dhb  Wrote it.
# 1/18/11  dhb  Write out and read tab delimited text, to illustrate
#               desired contest format.
# 2011-03-18 pi Converted it to python, with automatic file fetching
# latest version of this file can be found at:
# http://pirsquared.org/research/osacontest2011/SimpleIllumEstimation.py

import os, sys, logging
import numpy as np

logging.basicConfig(level=logging.INFO, format="%(levelname)s: %(message)s")
log = logging.getLogger('SimpleIllumEstimation.py')

try:
    import scipy.io as sio
except ImportError:
    log.critical("""Please install scipy in order to read .mat files, or
          contact Paul Ivanov requesting that he provide .npz equivalents, 
          removing the need to have scipy installed.""")
    raise

try:
    import matplotlib.pyplot as plt
    matplotlib_installed = True
except ImportError:
    matplotlib_installed = False

## Where the data are (but we'll fetch it if it's missing -pi)
dataDir = 'ContestImageData';

## Initialize answer index
actualIndex = 1; # this was in the matlab code but not used anywhere -pi

contest_url = "http://color.psych.upenn.edu/osacontest2011/"
def fetch_file(fname):
    "download files from the OSA contest website"
    log.info("%s not found, attempting to fetch it...", fname)
    import urllib
    try:
        u = urllib.urlopen(contest_url+fname)
        if u.code != 200: # if status code is not 'OK'
            import httplib
            raise IOError("Error %d "%u.code+httplib.responses[u.code])
        f = file(fname,'w')
        f.write(u.read()) # write the downloaded contents to disk
        f.close(); u.close();
        log.info("download succeeded.")
    except IOError, e:
        log.critical("Unable to fetch %s - please download it from:", fname)
        log.critical("    %s", contest_url+fname)
        raise

# ensure that we have the data
if not os.path.isdir(dataDir):
    log.info("Couldn't find '%s' directory,", dataDir)
    zfname= 'ContestImageData.zip'
    dataDir = 'ContestImageData'
    if not os.path.isfile(zfname):
        log.info("or the ~50 MB %s", zfname)
        log.warn("beware that it may take a while to download %s", zfname)
    log.info("will extract %s", zfname)
    import zipfile
    z = zipfile.ZipFile(zfname)
    for member in z.namelist():

## Load basis and cone files
tmp = sio.loadmat(os.path.join(dataDir,'T_cones_osa'));
# (the surface basis B_sur, illuminant basis B_illum, calibration illuminant
# spd_illumSpdCal, and the calibration image are loaded into tmp from their
# own .mat files in the same way)

nImages = 10
estimatedIllumSpds = np.zeros((31,nImages));

## Flags
LOOKATIMAGE = True   # set to False to skip displaying the images
PLOTESTIMATES = True # used by the (unconverted) error-computation code below

if LOOKATIMAGE and not matplotlib_installed:
    log.warn("Please install matplotlib if you want to see the images")
    LOOKATIMAGE = False
if LOOKATIMAGE:
    imgFig = plt.figure();

PAUSE_MSG = "Paused: press Enter to continue"

## Load calibration image and use it to, well, calibrate
# drop the meta information we get when loading .mat files
[tmp.pop(k) for k in tmp.keys() if k.startswith('_')];
# now put all of the items we've stored in tmp into the local namespace
globals().update(tmp)

if LOOKATIMAGE:
    plt.figure(imgFig.number); plt.clf();
    plt.imshow(calImage/calImage.max()); plt.title('calibration image');
    plt.show()
    #raw_input(PAUSE_MSG)

# We know the calibration image illuminant, so use this
# to find mean surface reflectance for the calibration
# image.  We can do this, since the mean LMS responses
# are equal to the cone coordinates of the mean surface
# under the known calibration illuminant, and we have
# three dimensional linear model constraint on surfaces.
# For processing the rest of the images, we'll assume that
# mean surface reflectance is equal to that derived here.
# This isn't necessarily the case, but seems more likely to
# be right than pulling a mean out of thin air.
meanLMS = np.matrix(np.zeros((3,1)));
# calImage holds the LMS calibration image (the variable comes in through
# globals().update above; one image plane per cone class)
for k in range(3):
    meanLMS[k] = calImage[:,:,k].mean()

T_cones_osa = np.matrix(T_cones_osa, copy=False)
spd_illumSpdCal = np.matrix(spd_illumSpdCal, copy=False)
B_sur = np.matrix(B_sur, copy=False)
B_illum = np.matrix(B_illum, copy=False)
# meanLMS = T_cones_osa * diag(spd_illumSpdCal) * B_sur * meanSurWgts, so
# inverting the 3x3 matrix on the right recovers the surface weights
meanSurWgts = np.linalg.inv(T_cones_osa*np.diagflat(spd_illumSpdCal)*B_sur)*meanLMS;

## Estimate the illuminant for each image, on the assumption that
# the spatial mean of the surface reflectances in the images matches
# the mean derived above.
for i in range(nImages):
    # Load the image
    fpath = os.path.join(dataDir,'Image%d'%(i+1))
    theImage = sio.loadmat(fpath)['theImage']
    if LOOKATIMAGE:
        plt.figure(imgFig.number); plt.clf();
        plt.imshow(theImage/theImage.max()); plt.title(fpath);
        plt.show()
        #raw_input(PAUSE_MSG)
    # All we need from the image for this simple algorithm is the
    # mean LMS values
    for k in range(3):
        meanLMS[k] = theImage[:,:,k].mean()
    # The mean LMS values let us get the illuminant, within the illuminant
    # linear model: meanLMS = T_cones_osa * diag(B_sur*meanSurWgts) * B_illum * w,
    # so solve for the illuminant weights w and expand them through B_illum
    estimatedIllumSpds[:,i] = (B_illum*np.linalg.inv(T_cones_osa*np.diagflat(B_sur*meanSurWgts)*B_illum)*meanLMS).T

## Save out the estimates.  This is simple tab delimited text, with no
# headers, and is the format we use for the contest.
np.savetxt('myEstimatedIllumSpds.txt', estimatedIllumSpds, fmt="  % .7e", delimiter='\t')
## Load the estimates we just saved, simply to prove we know how to
## do it.
estimatedIllumSpds = np.loadtxt('myEstimatedIllumSpds.txt')

if not os.path.isfile('estimatedIllumSpds.txt'):
    fetch_file('estimatedIllumSpds.txt')
estimatedIllumSpdsProvided = np.loadtxt('estimatedIllumSpds.txt')

ediff = estimatedIllumSpdsProvided - estimatedIllumSpds
# SSD: sum of the squares of differences
print "The SSD between file provided and the one calculated is %.9f"%(ediff**2).sum()

# TODO: the code below has not yet been converted to python -pi

### Compute the error, if the answers are available.
#if (PLOTESTIMATES)
#    illumFig = figure;
#end
#if (exist([dataDir 'actualIllumSpds.mat'],'file'))
#    SumMSE = 0;
#    load([dataDir 'actualIllumSpds']);
#    if (size(actualIllumSpds,2) ~= nImages)
#        error('Oops.  Inconsistent number of contest images');
#    end
#    for i = 1:nImages
#        # Scale the estimate to get the best MSE fit to the actual
#        scaledEstimatedSpd = estimatedIllumSpds(:,i)*(estimatedIllumSpds(:,i)\actualIllumSpds(:,i));
#        # Take difference
#        estimationDiff = actualIllumSpds(:,i)-scaledEstimatedSpd;
#        # Compute MSE
#        MSE = sum(estimationDiff.^2)/length(estimationDiff);
#        SumMSE = SumMSE + MSE;
#        # Plot, just as a reality check
#        if (PLOTESTIMATES)
#            figure(illumFig); clf; hold on
#            plot(400:10:700,actualIllumSpds(:,i),'k','LineWidth',2);
#            plot(400:10:700,estimatedIllumSpds(:,i),'r:','LineWidth',1);
#            plot(400:10:700,scaledEstimatedSpd,'r','LineWidth',2);
#            xlabel('Wavelength');
#            ylabel('Illuminant');
#            xlim([400 700]);
#            axis('square');
#            pause;
#        end
#    end
#    fprintf('Mean of MSE over illuminant estimates is %g\n',SumMSE/nImages);
#else
#    fprintf('Sorry, you don''t get to know the actual illuminants until the contest is over!\n');
#end
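Until that block is ported, here is a sketch of the scoring computation it performs, written as a standalone function (the .mat loading and plotting are left out, and mean_mse is my name for it, not the organizers'):

```python
import numpy as np

def mean_mse(estimated, actual):
    """Mean squared error between actual and estimated illuminant
    spectra, averaged over images, after scaling each estimate for
    the best least-squares fit to the actual spectrum."""
    if estimated.shape[1] != actual.shape[1]:
        raise ValueError('Oops.  Inconsistent number of contest images')
    sumMSE = 0.0
    for i in range(actual.shape[1]):
        e, a = estimated[:, i], actual[:, i]
        # least-squares scale factor; the equivalent of MATLAB's
        # estimatedIllumSpds(:,i)\actualIllumSpds(:,i)
        scaled = e * (np.dot(e, a) / np.dot(e, e))
        diff = a - scaled
        sumMSE += (diff ** 2).sum() / len(diff)
    return sumMSE / actual.shape[1]
```

Because each column is rescaled before the comparison, a spectral estimate with the correct shape scores zero even if its overall level is off.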

Back to my research page.


Paul Ivanov

pi (@.@) berkeley .:. edu
GPG/PGP key id: 0x0F3E28F7
fingerprint: CA3B 3BF8 394D 4E92 95E1 4B17 7BE7 2645 0F3E 28F7
Office Mailing Address
Disqus Inc
301 Howard St Suite 300
San Francisco, CA 94105