A 2018-19 Magic Grant Profile

By Alex Calderwood.

From algorithms that design drone flight paths for recording a scene, to 360° camera technology that helps a photographer find the best placement for their camera and lights, Brown has long supported work that makes complex storytelling tools easier to use. Among the 2018-19 Magic Grants are several such projects that help “non-experts” perform creative tasks or explore entirely new forms of expression: Artistic Vision, Neverending 360, and Dynamic Brushes.

Artistic Vision

With smartphones and social media, amateur photography is everywhere — everywhere, all the time. Yet shooting high-quality pictures is an art that still requires devotion, talent, and education to fully master. While post-processing tools like Photoshop can improve an image by adjusting the exposure or smudging out clutter in the background, the project Artistic Vision provides creative insights into a photo’s composition before it’s taken.

Stanford PhD student Jane E and postdoctoral research scholar Ohad Fried are working on two separate overlays for a camera’s viewfinder, each aimed at a different aspect of photo composition. The first is a tool for aligning objects in the frame, which works like the alignment hints in PowerPoint or Adobe Illustrator. It extends the overlays some cameras already provide, which divide the frame according to the “rule of thirds” for more aesthetic shots. The algorithm they are developing picks out objects that are good candidates for aligning, then displays guide lines that assist in that alignment.
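
Purely as an illustration of the idea, and not E and Fried’s actual algorithm, the core of such an overlay can be sketched as a search for detected objects whose centers nearly share a coordinate. The function name, the pixel tolerance, and the example positions below are all invented for this sketch.

```python
# Hypothetical sketch: given centroids of detected objects, find pairs that
# are almost vertically or horizontally aligned and emit a guide line a
# viewfinder overlay could draw.

def alignment_hints(centroids, tolerance=10):
    """centroids: list of (x, y) pixel positions of candidate objects."""
    hints = []
    for i, (x1, y1) in enumerate(centroids):
        for x2, y2 in centroids[i + 1:]:
            if abs(x1 - x2) <= tolerance:              # near-vertical alignment
                hints.append(("vertical", (x1 + x2) / 2))
            if abs(y1 - y2) <= tolerance:              # near-horizontal alignment
                hints.append(("horizontal", (y1 + y2) / 2))
    return hints


if __name__ == "__main__":
    # Three objects: two nearly share an x-coordinate, two nearly share a y-coordinate.
    objects = [(100, 200), (104, 620), (512, 205)]
    for orientation, position in alignment_hints(objects):
        print(f"draw a {orientation} guide at {position:.0f}px")
```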

The second project by E and Fried is at an earlier prototype stage, but is even more ambitious. The idea is to encourage the photographer to remove clutter from the image while lining up the shot. It works by displaying an abstract version of the image (think geometric shapes: squares and circles) that draws attention to extraneous objects in the background or at the edges of the frame. In studies with the prototype, users realized, for instance, that “there are a bunch of bikes in the background” and adjusted the shot, according to E.
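
As a rough sketch of that abstraction step, and not the team’s prototype, one could reduce each detected object to a plain rectangle and flag the ones crowding the frame’s edges. The detection boxes and the 10% margin below are made up for the example.

```python
# Hypothetical sketch: represent each detected object as a bare rectangle and
# mark the ones near the frame border as likely clutter.

def abstract_frame(boxes, width, height, margin=0.1):
    """boxes: list of (x, y, w, h); returns (box, is_edge_clutter) pairs."""
    edge_x, edge_y = width * margin, height * margin
    abstraction = []
    for x, y, w, h in boxes:
        near_edge = (x < edge_x or y < edge_y or
                     x + w > width - edge_x or y + h > height - edge_y)
        abstraction.append(((x, y, w, h), near_edge))
    return abstraction


if __name__ == "__main__":
    frame_w, frame_h = 1920, 1080
    detections = [(800, 400, 300, 500),   # subject near the center
                  (30, 700, 150, 120)]    # bikes creeping in at the left edge
    for box, clutter in abstract_frame(detections, frame_w, frame_h):
        print(box, "->", "possible clutter" if clutter else "subject")
```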

E says that, at a high level, the team is interested in designing camera interfaces that give better feedback. Eventually, they hope their work will address even higher-level aspects of photography: not just composition, but how to use an image to tell a story or create a mood. She admits that she doesn’t know yet what such a tool would look like, but says that studying these questions will advance research in human-computer interaction and help novice and master photographers alike.

Neverending 360

The past four years have seen an explosion of technologies supporting the creation of 360-degree video, from new omnidirectional camera rigs to 360-degree streaming on platforms like YouTube and at outlets like the New York Times. This technology promises to deepen viewers’ immersion in recorded video, giving them a heightened sense of place and the freedom to focus on whatever catches their attention within the recording. It has already demonstrated its usefulness for journalism, transporting viewers into active warzones and simulations of solitary confinement.

But as anyone who has watched a few 360-degree videos can attest, the freedom to change your perspective at any point can be disorienting and confusing: the action in the video continues regardless of where you are looking, so important events might take place behind or above you, or elsewhere outside your field of vision. Because directors have less control over what their audience sees, each viewer’s experience is slightly different.

Neverending 360 is Stanford PhD student Sean Liu’s attempt to solve this problem of direction. For the project, her team built a 360-degree video editor that lets authors specify “visual triggers,” which pause playback when an important event is approaching in the video and the viewer is looking elsewhere. When attention shifts back to the action area, the video resumes. Their goal is to deliver an authoring tool that will “ensure coherent storytelling,” says Liu. The tool allows authors to define these contextual play/pause events, blurring traditional lines of interactivity while ensuring that the experience can be followed as a clear, linear piece of storytelling.
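
To make the play/pause idea concrete, here is a minimal sketch of that logic, not Liu’s editor or its data model. The trigger fields, the yaw-only gaze check, and the numbers are all hypothetical.

```python
from dataclasses import dataclass

# Hypothetical sketch: a trigger holds playback when an important moment is
# near and the viewer's gaze is outside the region where the action happens.

@dataclass
class VisualTrigger:
    start: float          # seconds at which the trigger arms
    event_time: float     # when the important action occurs
    yaw_min: float        # horizontal bounds (degrees) of the action region
    yaw_max: float

    def should_pause(self, t: float, viewer_yaw: float) -> bool:
        armed = self.start <= t < self.event_time
        looking_away = not (self.yaw_min <= viewer_yaw <= self.yaw_max)
        return armed and looking_away


def step_playback(t, dt, viewer_yaw, triggers):
    """Advance the playhead unless an armed trigger says to hold the frame."""
    if any(trig.should_pause(t, viewer_yaw) for trig in triggers):
        return t          # hold until the viewer looks back at the action
    return t + dt         # otherwise keep playing


if __name__ == "__main__":
    triggers = [VisualTrigger(start=8.0, event_time=10.0, yaw_min=-30, yaw_max=30)]
    print(step_playback(9.0, 1 / 30, viewer_yaw=120.0, triggers=triggers))  # paused at 9.0
    print(step_playback(9.0, 1 / 30, viewer_yaw=0.0, triggers=triggers))    # advances past 9.0
```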

Dynamic Brushes

Jennifer Jacobs, a Brown Institute Postdoctoral Fellow, is working on computational drawing tools for visual artists. Her work, in addition to its artistic ambitions, has applications in Computer Aided Design (CAD) systems that support digital fabrication, automated manufacturing, and animation: domains in which artists benefit from a greater ability to define the “algorithmic processes” behind a design.

One of her projects, Dynamic Brushes, is a hybrid between a stylus-based drawing interface and a symbolic programming language: artists design algorithms that transform how the gestures they make with their stylus get drawn to the screen, while the system also provides a canvas that interacts with their input.
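
As an illustration of the general idea rather than Dynamic Brushes’ actual programming model, one can think of a brush program as a function from a single stylus sample to the marks that end up on the canvas. The six-fold symmetry, the canvas center, and the pressure-to-radius mapping below are invented for the example.

```python
import math

# Hypothetical sketch: a "brush program" maps one stylus sample to the list of
# marks actually drawn. This one stamps each gesture point around the canvas
# center with six-fold rotational symmetry.

def radial_brush(x, y, pressure, cx=400, cy=400, copies=6):
    """Map a single stylus sample to several drawn marks (x, y, radius)."""
    dx, dy = x - cx, y - cy
    marks = []
    for k in range(copies):
        angle = 2 * math.pi * k / copies
        rx = cx + dx * math.cos(angle) - dy * math.sin(angle)
        ry = cy + dx * math.sin(angle) + dy * math.cos(angle)
        marks.append((rx, ry, pressure * 4))   # pressure sets the mark radius
    return marks


if __name__ == "__main__":
    # One stylus sample at (500, 400) with medium pressure becomes six marks.
    for mark in radial_brush(500, 400, pressure=0.5):
        print(tuple(round(v, 1) for v in mark))
```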

The project builds on a rich legacy of programming interfaces designed to assist artistic practices. For example, the Java-based language Processing is built around the notion of a “sketch” that draws to a display region or canvas, so creating a graphical composition resembles traditional programming. Jacobs says Dynamic Brushes’ interface is instead most similar to Max/MSP, a “visual programming language” for audio processing and music composition: it uses a graphical representation of algorithms rather than the “literate programming” paradigm of languages like Processing, which produce graphical effects through words and grammar.

When describing how artists might use Dynamic Brushes, Jacobs stresses that she is working from the model of building creative tooling, as opposed to a collaborator model, “where both entities are on equal terms but have different abilities”, or an apprentice model, “where the human is telling the computer what to do.”

Jacobs’ work won CHI’s best paper award in 2018.