CTO of Drip.Art (now Comfy Org) San Francisco CA January 2023 — 2024
Led technical development and the engineering team from prototype to a production-scale AI generation platform while managing investor relations and strategic direction; the product evolved from custom portrait generation to video stylization before a strategic pivot to ComfyUI workflow productionization infrastructure.
- Led pre-seed fundraising, securing the funding that carried the company through multiple product iterations and strategic pivots while retaining technical leadership responsibilities.
- Developed evaluation frameworks for personalized model quality and user preference alignment, analyzing engagement patterns across 10k+ users to identify a stronger market opportunity in workflow infrastructure than in consumer generation.
- Managed a small team of full-time engineers and contractors across full-stack development, MLE, and DevOps through regular 1:1s, reviews, and career development planning.
- Executed the strategic pivot to ComfyUI workflow productionization, partnering with the ComfyUI founder to integrate the technology and rebrand the company as Comfy Org.
- Built multi-tenant ML serving infrastructure supporting hundreds of custom models concurrently, with intelligent caching and batching for personalized AI-generated content (a minimal caching sketch follows this role's highlights).
- Architected multi-cloud GPU orchestration system across AWS, Lambda Labs, and CoreWeave with cost-optimized scheduling and multi-gigabyte model checkpoint management.
- Designed automated training pipelines for user-specific model fine-tuning (LoRAs/Dreambooth), enabling rapid personalization of Stable Diffusion models.
- Maintained sub-6-hour initial-generation and sub-5-minute subsequent-generation SLAs while optimizing for cost efficiency at scale.
- Skill highlights: Python, PyTorch, Stable Diffusion, ComfyUI, Kubernetes, Firebase, PostgreSQL, Multi-cloud GPU Infrastructure, ML Model Serving, Engineering Management
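A minimal Python sketch of the per-user checkpoint caching idea referenced above; the class, capacity, and loader are illustrative stand-ins rather than the production code:

    from collections import OrderedDict
    from typing import Callable

    class ModelCache:
        """Toy LRU cache for per-user fine-tuned checkpoints.

        Keeps at most `capacity` models resident; the least recently used
        model is evicted when a new personalized model must be loaded.
        """

        def __init__(self, capacity: int, load_fn: Callable[[str], object]):
            self.capacity = capacity
            self.load_fn = load_fn  # e.g., pulls a LoRA checkpoint from blob storage
            self.models: OrderedDict[str, object] = OrderedDict()

        def get(self, user_id: str) -> object:
            if user_id in self.models:
                self.models.move_to_end(user_id)  # mark as recently used
                return self.models[user_id]
            if len(self.models) >= self.capacity:
                self.models.popitem(last=False)   # evict least recently used
            model = self.load_fn(user_id)
            self.models[user_id] = model
            return model

    # Usage: cache = ModelCache(capacity=8, load_fn=lambda uid: f"model-for-{uid}")

The production system combined this kind of caching with request batching; the sketch shows only the eviction policy at its core.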
Meta Palo Alto CA May 2022 — December 2022
Worked on the Ads Core ML team, developing and optimizing AutoML systems that maintained and tuned hundreds of advertising models through automated retraining and architecture improvements.
- Enhanced feature selection methodology by extending feature importance metrics to identify negatively impacting features, automatically removing features that degraded holdout performance.
- Led a cross-team initiative to benchmark and optimize AutoML systems, developing attribution frameworks to isolate and quantify performance gains from different automated approaches while accounting for confounding factors across model scales and training regimes.
- Skill highlights: Python, PyTorch, AutoML, Machine Learning Infrastructure, Model Optimization
Google Mountain View CA December 2016 — April 2022
Gboard — Federated Analytics and Machine Learning July 2019 — April 2022
I worked on a modeling team for Gboard and partnered with international teams on private federated learning. We identified ‘private heavy hitters’, the most frequent items in a dataset, without centrally logging what a user does on their keyboard. With those heavy hitters, we trained models to power new experiences and adapted existing models to changing user needs.
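To make the heavy-hitter mechanism concrete, here is a toy Python sketch of an invertible Bloom lookup table restricted to set reconstruction; all names here are illustrative, and the production system ran on TensorFlow Federated with secure aggregation and differential privacy layered on top, extending the structure to carry counts:

    import hashlib

    def _hash(data: bytes, salt: int) -> int:
        return int.from_bytes(hashlib.sha256(salt.to_bytes(4, "big") + data).digest(), "big")

    class IBLT:
        """Toy invertible Bloom lookup table reconstructing a set of short strings."""

        def __init__(self, num_cells: int = 200, num_hashes: int = 3, key_bytes: int = 16):
            self.m, self.k, self.key_bytes = num_cells, num_hashes, key_bytes
            self.count = [0] * num_cells
            self.key_sum = [0] * num_cells    # XOR of keys hashed into each cell
            self.check_sum = [0] * num_cells  # XOR of key checksums; detects "pure" cells

        def _key(self, item: str) -> int:
            data = item.encode()
            assert len(data) <= self.key_bytes, "toy version: short items only"
            return int.from_bytes(data.ljust(self.key_bytes, b"\0"), "big")

        def _cells(self, key: int):
            # Partitioned hashing: each of the k hashes owns its own sub-table,
            # so the k cells for a key are always distinct.
            kb = key.to_bytes(self.key_bytes, "big")
            width = self.m // self.k
            return [salt * width + _hash(kb, salt) % width for salt in range(self.k)]

        def _check(self, key: int) -> int:
            return _hash(key.to_bytes(self.key_bytes, "big"), 999)

        def insert(self, item: str) -> None:
            key = self._key(item)
            for c in self._cells(key):
                self.count[c] += 1
                self.key_sum[c] ^= key
                self.check_sum[c] ^= self._check(key)

        def merge(self, other: "IBLT") -> None:
            # Cell-wise merging is what makes the sketch aggregatable across clients.
            for c in range(self.m):
                self.count[c] += other.count[c]
                self.key_sum[c] ^= other.key_sum[c]
                self.check_sum[c] ^= other.check_sum[c]

        def decode(self) -> list[str]:
            """Repeatedly peel pure cells (count == 1 and consistent checksum)."""
            items, progress = [], True
            while progress:
                progress = False
                for c in range(self.m):
                    key = self.key_sum[c]
                    if self.count[c] == 1 and self.check_sum[c] == self._check(key):
                        items.append(key.to_bytes(self.key_bytes, "big").rstrip(b"\0").decode())
                        for c2 in self._cells(key):
                            self.count[c2] -= 1
                            self.key_sum[c2] ^= key
                            self.check_sum[c2] ^= self._check(key)
                        progress = True
            return items

    # Two simulated clients contribute items; the merged sketch decodes both
    # without any per-client log: ['emoji:)', 'hello']
    a, b = IBLT(), IBLT()
    a.insert("hello"); b.insert("emoji:)")
    a.merge(b)
    print(sorted(a.decode()))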
- Designed, implemented, and shipped our team's first on-device personalized models.
- Shipped two iterations of sticker pack recommendations using collaborative filtering: one using item-item collaborative filtering over privately aggregated item co-share data (a minimal sketch of this approach follows this role's highlights), and another using user-item embeddings.
- Impact: significantly increased engagement with recommendations and established several team processes (e.g., diagnosing data corruption in federated analytics tasks, federated model evaluation).
- Developed new federated tasks to securely aggregate heavy-hitter data in anonymized batches, using Invertible Bloom Lookup Tables (IBLT), secure aggregation protocols, and differential privacy.
- Used heavy-hitter data to improve the quality of typing language models, emoji search, and emoji suggestions.
- Optimized federated models by creating custom TensorFlow ops.
- Identified, diagnosed, and resolved barriers to running private heavy-hitter tasks for teams outside my own.
- Completed Google's basic management training, interviewed multiple software engineer candidates, mentored team members through documentation and 1:1s, and presented internal tech talks on collaborative filtering.
- Skill highlights: Python, TensorFlow, TensorFlow Federated, Java, C++, IBLTs, Flume, Apache Beam, data pipelines, differential privacy, secure aggregation, collaborative filtering
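A minimal sketch of the item-item approach referenced in the sticker pack bullet above; the co-share counts and pack names are made up, and the shipped system worked over privately aggregated data at much larger scale:

    import math
    from collections import defaultdict

    # Hypothetical aggregated co-share counts: co[a][b] = number of users who
    # shared from both packs a and b (the diagonal is each pack's own count).
    co = {
        "cats": {"cats": 40, "dogs": 18, "memes": 6},
        "dogs": {"cats": 18, "dogs": 30, "memes": 4},
        "memes": {"cats": 6, "dogs": 4, "memes": 50},
    }

    def item_similarity(a: str, b: str) -> float:
        # Cosine-style similarity: co-occurrence normalized by each item's own count.
        return co[a].get(b, 0) / math.sqrt(co[a][a] * co[b][b])

    def recommend(owned: list[str], k: int = 2) -> list[str]:
        # Score every unowned pack by summed similarity to the packs a user has.
        scores: dict[str, float] = defaultdict(float)
        for a in owned:
            for b in co:
                if b not in owned:
                    scores[b] += item_similarity(a, b)
        return sorted(scores, key=scores.get, reverse=True)[:k]

    print(recommend(["cats"]))  # ['dogs', 'memes']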
Google Search — Search Frontend December 2016 — July 2019
Google.com Search consists of multiple teams covering different search verticals. My work on a horizontal infrastructure team was to optimize a slice of the stack, both for developer velocity and for the billions of search result pages served to end users every day.
- Analyzed metrics and deployed experiments to identify and implement JavaScript and rendering optimizations for mobile web search, improving the speed of the search experience for billions of users.
- Wrote efficient JavaScript libraries and polyfills for the Google search results page used by virtually every mobile query. Supported multiple accessibility modalities and browsers as old as iOS 8 Safari.
- Increased search developer velocity by developing easy-to-use APIs around async requests, without sacrificing performance for the end user. These libraries power the async experience of several search verticals, such as Jobs Search.
- Presented and coordinated the frontend portions of Search Features Bootcamp, a semi-annual course on developing for Google Search, three times: developed and tested course materials in dry runs with trial participants, ran the presentations and labs, and remained available as a point of contact for feature developers with follow-up questions.
- Supported a cross-platform component library for Google Search, which served mobile web surfaces and native mobile views in the Google Assistant on both Android and iOS.
- Planned, pitched, and directly managed two STEP interns on projects around client-side rendering on the search results page, providing daily mentorship, code reviews, and career guidance that led to full-time offers for both interns.
- Skill highlights: JavaScript, TypeScript, HTML/CSS, Java, Python, Objective-C, Protobuf, Blaze/Bazel, People Management, Mentorship
Apple Cupertino CA May 2013 — December 2016
Full-stack engineering for the Apple Instructional Design department, including team leadership responsibilities. This ranged from internal tooling for authors and localizers to creating new user-facing instructional experiences ahead of product launches.
- Created a custom CMS to manage authoring, localization, and review of content for the Tips app.
- Translated designs into localizable HTML/CSS/JavaScript for "Quick Tours," interactive introductions to macOS and first-party apps in over 20 supported languages (e.g., help.apple.com/osx-mavericks/whats-new).
- Led an exploratory project for a help chatbot to recommend relevant support content, directly managing an intern and coordinating with cross-functional stakeholders.
- Completed Apple University management training and mentored junior engineers on web localization best practices.
- Skill highlights: JavaScript, CSS, Ember.js, Node.js, Objective-C, Postgres, Web Localization
Education
Rochester Institute of Technology September 2009 — May 2013
Bachelor of Science in Computer Science, with minors in Mathematics and Economics.
Stanford Online, Coursera April 2016
Certificate from Andrew Ng's Machine Learning course on Coursera.
Other Projects
Blaze/Bazel Build Notification Widget 2017
When I joined Google Search, I found that I often missed when a build finished, especially if it had failed early. A cold build could take 30-45 minutes even on my specialty machine with 12 cores and 128 GB of RAM, which meant context switching whenever Blaze's cache was too old. So I built this LED notification widget to tell me when the build had finished with attention-grabbing lights. I provided parts to coworkers on my team and taught a quick class on assembling and programming the microcontroller as a Blaze status indicator.
The widget connects to the host machine over USB and sat somewhere noticeable in my office. I wrote a small Python daemon for the build machine that polls my Blaze server for updates to its status. When it detects a change, it pushes the updated status over USB to a Teensy microcontroller, which controls a ring of WS2812B LEDs, pulsing all red for failure or all green for success. The walls and back of the enclosure are laser-cut wood, and the front is white acrylic to diffuse the LEDs inside. The back wood panel is held on by a single screw that, when loosened, allows the panel to swivel open for servicing.
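A condensed sketch of that polling loop in Python; the `blaze_status` command, serial path, and one-byte wire format are stand-ins for the internal details:

    import subprocess
    import time

    import serial  # pyserial; the Teensy enumerates as a USB serial device

    STATUS_COLORS = {"building": b"B", "success": b"G", "failure": b"R"}

    def poll_build_status() -> str:
        # Stand-in for querying the local Blaze server; pretend a CLI prints
        # one of "building", "success", or "failure".
        out = subprocess.run(["blaze_status"], capture_output=True, text=True)
        return out.stdout.strip()

    def main() -> None:
        port = serial.Serial("/dev/ttyACM0", 9600)
        last = None
        while True:
            status = poll_build_status()
            if status != last and status in STATUS_COLORS:
                port.write(STATUS_COLORS[status])  # Teensy pulses the ring red or green
                last = status
            time.sleep(2)  # poll interval

    if __name__ == "__main__":
        main()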


Kinect the Dots 2015
Spearheaded an electronic art installation called 'Kinect the Dots', a large-scale grid of approximately 1,900 LEDs that displays RGB silhouettes of nearby people in real time. It has been showcased at the Bay Area Maker Faire (two Editor's Choice awards), Burning Man, and Santa Cruz Glow.

C++ code on a MacBook interprets Kinect point cloud data, finds people in the scene, and pushes display data to a C program on a Teensy microcontroller. The Teensy drives the RGB LEDs as fast as it receives frames from the MacBook. The LEDs sit behind holes in mirrored acrylic, and in front of that mirror is a two-way-mirrored acrylic sheet, which bounces some of the light back into the mirrored acrylic, creating an "infinity mirror" effect.
I worked on every layer of the stack: fleshing out the artistic vision, architecting the components, sourcing materials, power calculations, machining, assembly, wiring, the microcontroller code, a wire format over USB, and the C++ code that used PrimeSense's pose detection models to find people in the scene.
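The host side was C++, but the framing idea behind that wire format is easy to sketch in Python; the sync header and exact layout here are illustrative assumptions:

    import struct

    import serial  # pyserial stand-in for the original C++ host code

    NUM_LEDS = 1900            # approximate grid size of the installation
    FRAME_MAGIC = b"\xDE\xAD"  # hypothetical header so the Teensy can find frame starts

    def pack_frame(pixels: list[tuple[int, int, int]]) -> bytes:
        # One frame: 2-byte magic, 2-byte big-endian LED count, then RGB triples.
        assert len(pixels) == NUM_LEDS
        body = b"".join(struct.pack("BBB", r, g, b) for (r, g, b) in pixels)
        return FRAME_MAGIC + struct.pack(">H", NUM_LEDS) + body

    def send_frame(port: serial.Serial, pixels: list[tuple[int, int, int]]) -> None:
        port.write(pack_frame(pixels))  # Teensy latches the LEDs per complete frame

    if __name__ == "__main__":
        port = serial.Serial("/dev/ttyACM0", 2000000)
        send_frame(port, [(0, 0, 0)] * NUM_LEDS)  # blank the grid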

