As the name implies, multi-objective optimization problems are a class of problems in which one seeks to optimize over multiple, conflicting objectives.
Optimizing over one objective is relatively easy: given information on traffic, a navigation app can suggest which route it expects to be the fastest. But with multiple objectives the problem becomes complicated: perhaps you want a reasonably fast route that won’t use too much gas and still gives you time to take in the view outside your window.
Or, perhaps, you have multiple deadlines pending and you want to do perfectly on all of them, but you also have limited time and would like to eat and maybe sleep sometime, too. How do you prioritize your time? How do you optimize over all the possible things you could be doing?
This is not easy.
Rather than having a single optimal solution, these problems have a set of solutions, known as the Pareto front. Each solution on the front is equally optimal in the mathematical sense that no objective can be improved without making another objective worse; each represents a different trade-off among the objectives.
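The front can be found with a brute-force dominance check. Here is a minimal Python sketch on a made-up routing example – the objective values are invented for illustration, and both objectives are treated as things to minimize:

```python
def dominates(q, p):
    """q dominates p if q is at least as good on every objective
    and strictly better on at least one (minimization)."""
    return all(qi <= pi for qi, pi in zip(q, p)) and \
           any(qi < pi for qi, pi in zip(q, p))

def pareto_front(points):
    """Keep only the points that no other point dominates."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Each candidate route scored as (travel time in minutes, fuel in gallons).
routes = [(30, 5), (45, 3), (35, 6), (60, 2)]
print(pareto_front(routes))  # (35, 6) is dominated by (30, 5), so it drops out
```

The three surviving routes are all "optimal": faster routes burn more fuel, thriftier routes take longer, and only you can say which trade-off you prefer.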
Chen et al. take a somewhat different approach – designing a tool that lets a user interact with the Pareto front, visually inspect the trade-offs each solution implicitly makes, and select the solutions they see as best meeting their needs:
Consider Herman Chernoff’s 1972 paper, “The Use of Faces to Represent Points in k-Dimensional Space Graphically.” The name is pretty self-explanatory: it’s an attempt to represent high-dimensional data…through the use, as Chernoff explains, of “a cartoon of a face whose features, such as length of nose and curvature of mouth, correspond to components of the point.”
Here’s an example:
I just find this hilarious.
But, as crazy as this approach may seem – there’s something really interesting about it. Most standard efforts to represent high-dimensional data revolve around projecting that data into lower-dimensional (e.g., 2-dimensional) space. This allows the data to be shown on standard plots, but risks losing something valuable in the data compression.
Showing k-dimensional data as cartoon faces is probably not the best solution, but I appreciate the motivation behind it – the question it raises: ‘how can we present high-dimensional data high-dimensionally?’
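The encoding itself is simple to sketch: normalize each component of a data point, then rescale it onto one facial feature’s drawing range. The feature names and ranges below are my own illustrative inventions, not Chernoff’s originals:

```python
# Hypothetical facial features, each with a (min, max) drawing value.
FACE_FEATURES = {
    "nose_length":     (0.2, 1.0),
    "mouth_curvature": (-0.5, 0.5),
    "eye_size":        (0.1, 0.4),
    "face_width":      (0.6, 1.4),
}

def chernoff_params(point, data_min, data_max):
    """Map each component of a k-dimensional point onto one facial
    feature, given per-dimension min/max over the whole dataset."""
    params = {}
    for (name, (lo, hi)), x, dmin, dmax in zip(
            FACE_FEATURES.items(), point, data_min, data_max):
        t = (x - dmin) / (dmax - dmin)   # normalize component to [0, 1]
        params[name] = lo + t * (hi - lo)  # rescale to the feature's range
    return params

# A point sitting at the midpoint of every dimension gets an "average" face.
print(chernoff_params((5, 0, 10, 2), (0, -1, 0, 0), (10, 1, 20, 4)))
```

A drawing routine would then read these parameters off to render the actual cartoon; the point is that every dimension of the data gets its own visual channel rather than being projected away.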
For one of my class projects, I’ve been reading a lot about interactive machine learning – an approach which Karl Sims describes as allowing “the user and computer to interactively work together in a new way to produce results that neither could easily produce alone.”
In some ways, this approach is intuitive. Michael Muller, for example, argues that any work with technology has an inherently social dimension. “Must we always analyze the impact of technology on people,” he asks, “or is there just as strong an impact of people on technology?” From this perspective, any machine learning approach which doesn’t account for both the user and the algorithm is incomplete.
Jerry Fails and Dan Olsen fully embrace this approach, proposing a paradigm shift in the fundamental way researchers approach machine learning tasks. While classic machine learning models “require the user to choose features and wait an extended amount of time for the algorithm to train,” Fails and Olsen propose an interactive machine learning approach which feeds a large number of features into a classifier, with human judgement continually correcting and refining the results. They find this approach removes the need to pre-select features, reduces the burden of technical knowledge on the user, and significantly speeds up training.
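That correct-and-refine loop can be sketched in a few lines. Below is a toy perceptron where a simulated “user” stands in for the human judge, flagging misclassifications each round until satisfied – a cartoon of the interaction style, not Fails and Olsen’s actual system:

```python
def predict(w, b, x):
    """Linear classifier: 1 if the weighted sum is positive, else 0."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

def interactive_train(points, user_label, rounds=10, lr=0.1):
    """Each round, the 'user' reviews every prediction and corrects
    mistakes; the model updates immediately on each correction."""
    w, b = [0.0] * len(points[0]), 0.0
    for _ in range(rounds):
        corrections = 0
        for x in points:
            y = user_label(x)           # human judgement on this example
            if predict(w, b, x) != y:   # user flags a misclassification
                sign = 1 if y == 1 else -1
                w = [wi + lr * sign * xi for wi, xi in zip(w, x)]
                b += lr * sign
                corrections += 1
        if corrections == 0:            # user is satisfied; stop early
            break
    return w, b

# Simulated user who wants only (1, 1) in the positive class.
w, b = interactive_train(
    [(0, 0), (0, 1), (1, 0), (1, 1)],
    user_label=lambda x: 1 if x == (1, 1) else 0,
)
```

The appeal is exactly what Fails and Olsen describe: the user never picks features or waits for a long offline training run – they just keep correcting outputs until the classifier behaves.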