I received my PhD from Stanford University, supported by a Hertz Fellowship, where I worked with Pat Hanrahan. I completed my undergraduate degree at Caltech, working with Mathieu Desbrun. During a postdoc at Stanford, I also worked on the data analytics team at Khan Academy. Currently, I am a Principal Research Scientist at Adobe Research. My research combines computer graphics, vision, and machine learning to make creative tasks faster and more fun to complete.
Technology

I've helped develop several tools that have shipped in different Adobe products.
Tracing, coloring, and shading vector artwork can be an exacting and time-consuming process. Project Sunshine jump-starts this process by helping artists add vibrant colors and shading to their artwork. Artists can select a character or object, and Project Sunshine will use machine learning to understand what the artwork represents and suggest many different ways to color it. Project Sunshine can also add shadows and highlights to artwork while letting the artist experiment with lighting from multiple directions.
We added powerful 3D raytracing features to Illustrator. I helped write the Monte Carlo denoiser, based on our SIGGRAPH 2021 paper, Interactive Monte Carlo Denoising using Affinity of Neural Features.
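The core idea of affinity-based denoising can be illustrated with a toy cross-bilateral filter: each pixel's radiance is averaged with its neighbors, weighted by how similar their per-pixel feature vectors are. This is only a hand-written sketch of the general technique; the shipped denoiser learns its affinities with a neural network, and the function name and parameters here are illustrative.

```python
import numpy as np

def affinity_denoise(radiance, features, radius=2, bandwidth=0.5):
    """Toy cross-bilateral denoiser (illustrative, not the shipped model):
    average each pixel with neighbors, weighted by feature similarity."""
    h, w, _ = radiance.shape
    out = np.zeros_like(radiance)
    for y in range(h):
        for x in range(w):
            acc = np.zeros(3)
            wsum = 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        # Gaussian falloff on feature-space distance.
                        d = features[y, x] - features[ny, nx]
                        wgt = np.exp(-np.dot(d, d) / (2 * bandwidth**2))
                        acc += wgt * radiance[ny, nx]
                        wsum += wgt
            out[y, x] = acc / wsum
    return out
```

Because the weights depend on auxiliary features (normals, albedo, depth, or learned embeddings) rather than raw noisy colors, edges described by those features are preserved while Monte Carlo noise is smoothed away.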
This technology takes your sketches, photographs, or other artwork and converts them into vector graphics. I trained a series of deep learning models that help the vectorization engine understand which parts of a sketch to focus on, letting it ignore shadows, smudges, grid lines, and other artifacts.
We built a tool to help artists easily manipulate vector artwork while preserving core shape properties such as right angles or perfect circles.
Automatically generate transcripts and add captions to your videos to improve accessibility and boost engagement with Speech to Text in Premiere Pro. I trained some of the core deep learning language models that determine how and where to split transcripts into captions.
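The splitting task itself can be sketched with a simple greedy baseline: pack words into caption lines up to a character budget. The shipped feature uses learned language models to pick natural break points; the function and parameter names below are hypothetical, shown only to make the task concrete.

```python
def split_captions(words, max_chars=32):
    """Greedy caption splitter (baseline sketch, not the shipped model):
    pack words into lines no longer than max_chars characters."""
    lines, current = [], ""
    for word in words:
        candidate = (current + " " + word).strip()
        # Accept the word if it fits, or if the line is empty
        # (so a single over-long word still gets emitted).
        if len(candidate) <= max_chars or not current:
            current = candidate
        else:
            lines.append(current)
            current = word
    if current:
        lines.append(current)
    return lines
```

A learned model improves on this by preferring breaks at phrase boundaries rather than wherever the character limit happens to fall.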
This technology lets you automatically recolor vector artwork to match the colors of a target photograph or palette.
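A minimal way to picture recoloring against a palette is to snap each artwork color to its nearest palette entry. This sketch is far simpler than the shipped feature (which preserves color relationships and harmony across the artwork); the function name is an assumption for illustration.

```python
import numpy as np

def recolor_to_palette(colors, palette):
    """Naive recoloring sketch: map each artwork color (RGB in [0, 1])
    to the nearest color in a target palette."""
    colors = np.asarray(colors, dtype=float)
    palette = np.asarray(palette, dtype=float)
    # Squared distance between every artwork color and every palette entry.
    d = ((colors[:, None, :] - palette[None, :, :]) ** 2).sum(axis=2)
    return palette[d.argmin(axis=1)]
```

Working in a perceptual color space such as CIELAB instead of raw RGB would make the nearest-color matches better track what a viewer perceives.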
This tool uses a generative deep network to automatically colorize black and white photographs and can optionally incorporate user-guided coloring suggestions.
The freeform gradients tool naturally lets you control the diffusion of colors across your vector graphics and is one of many ways we are looking into making the Gradient Mesh tool easier and faster to use.
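The idea of colors diffusing outward from user-placed points can be sketched with inverse-distance weighting: every pixel blends the point colors, weighted by proximity. This stands in for a diffusion-style solve and is not Illustrator's actual solver; all names here are illustrative.

```python
import numpy as np

def freeform_fill(width, height, points, power=2.0):
    """Inverse-distance-weighted color blend from user-placed color points
    (an illustrative stand-in for diffusion-style freeform gradients)."""
    ys, xs = np.mgrid[0:height, 0:width]
    img = np.zeros((height, width, 3))
    wsum = np.zeros((height, width))
    for (px, py), color in points:
        d2 = (xs - px) ** 2 + (ys - py) ** 2
        # Epsilon keeps the weight finite at the point itself.
        w = 1.0 / (d2 ** (power / 2) + 1e-6)
        img += w[..., None] * np.asarray(color, dtype=float)
        wsum += w
    return img / wsum[..., None]
```

Each added color point simply contributes another weighted term, which is what makes this kind of interface feel more direct than hand-editing a gradient mesh's control grid.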
The renderer in Adobe Dimension was updated to use a deep network that efficiently reduces the image noise from Monte Carlo sampling in its photorealistic renderer.
Designs often contain multiple copies of similar objects, such as logos. When you need to edit all of them, the global editing tool lets you update every similar object in the design in one step.
Puppet Warp in Illustrator lets you twist and distort parts of your artwork so that the transformations appear natural. You can add, move, and rotate pins to seamlessly transform your artwork into different variations.
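Pin-driven deformation can be sketched as blending each pin's displacement across the artwork, weighted by distance. Real puppet-warp systems solve an as-rigid-as-possible deformation that keeps local shape intact; this toy version (with hypothetical names) only shows how pins drive the points between them.

```python
import numpy as np

def pin_warp(points, pins, offsets, power=2.0):
    """Toy pin-based warp (not an ARAP solver): move each point by an
    inverse-distance-weighted blend of the pins' displacement vectors."""
    points = np.asarray(points, dtype=float)
    pins = np.asarray(pins, dtype=float)
    offsets = np.asarray(offsets, dtype=float)
    out = points.copy()
    for i, p in enumerate(points):
        d2 = ((pins - p) ** 2).sum(axis=1)
        # Nearby pins dominate; epsilon avoids division by zero at a pin.
        w = 1.0 / (d2 ** (power / 2) + 1e-6)
        out[i] += (w[:, None] * offsets).sum(axis=0) / w.sum()
    return out
```

Points at a pin follow it exactly, while points between pins move by a smooth blend, which is the qualitative behavior that makes pin-based editing feel natural.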