Software Engineer, Performance Optimization



San Mateo, United States · Fireworks AI · Full time

About Us

At Fireworks, we're building the future of generative AI infrastructure. Our platform delivers the highest-quality models with the fastest and most scalable inference in the industry. We've been independently benchmarked as the leader in LLM inference speed and are driving cutting-edge innovation through projects like our own function calling and multimodal models. Fireworks is a Series C company valued at $4 billion and backed by top investors including Benchmark, Sequoia, Lightspeed, Index, and Evantic.

The Role

We're looking for a Software Engineer focused on Performance Optimization to help push the boundaries of speed and efficiency across our AI infrastructure. In this role, you'll take ownership of optimizing performance at every layer of the stack, from low-level GPU kernels to large-scale distributed systems. A key focus will be maximizing the performance of our most demanding workloads, including large language models (LLMs), vision-language models (VLMs), and next-generation video models. You'll work closely with teams across research, infrastructure, and systems to identify performance bottlenecks, implement cutting-edge optimizations, and scale our AI systems to meet the demands of real-world production use cases. Your work will directly impact the speed, scalability, and cost-effectiveness of some of the most advanced generative AI models in the world.

Key Responsibilities

• Optimize system and GPU performance for high-throughput AI workloads across training and inference
• Analyze and improve latency, throughput, memory usage, and compute efficiency
• Profile system performance to detect and resolve GPU- and kernel-level bottlenecks (see the brief sketch after this list)
• Implement low-level optimizations using CUDA, Triton, and other performance tooling
• Drive improvements in execution speed and resource utilization for large-scale model workloads (LLMs, VLMs, and video models)
• Collaborate with ML researchers to co-design and tune model architectures for hardware efficiency
• Improve support for mixed precision, quantization, and model graph optimization
• Build and maintain performance benchmarking and monitoring infrastructure
• Scale inference and training systems across multi-GPU, multi-node environments
• Evaluate and integrate optimizations for emerging hardware accelerators and specialized runtimes
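As a rough illustration of the profiling and benchmarking work named above (not part of the original posting), the sketch below runs a minimal torch.profiler pass over a placeholder model and ranks operations by device time, the kind of first look that often precedes deeper Nsight or CUPTI analysis. The model, tensor shapes, and iteration counts are illustrative assumptions, not anything Fireworks-specific.

```python
import torch
from torch.profiler import profile, ProfilerActivity

# Minimal sketch: surface kernel-level hotspots in a forward pass.
# The model and tensor shapes are placeholders.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = torch.nn.Sequential(
    torch.nn.Linear(4096, 4096),
    torch.nn.GELU(),
    torch.nn.Linear(4096, 4096),
).to(device)
x = torch.randn(8, 4096, device=device)

# Warm up so one-time CUDA context and allocator costs don't skew the profile.
with torch.no_grad():
    for _ in range(3):
        model(x)

activities = [ProfilerActivity.CPU]
if device == "cuda":
    activities.append(ProfilerActivity.CUDA)

with profile(activities=activities, record_shapes=True) as prof:
    with torch.no_grad():
        model(x)

# Rank ops by accumulated self time to see where the forward pass is spent.
sort_key = "self_cuda_time_total" if device == "cuda" else "self_cpu_time_total"
print(prof.key_averages().table(sort_by=sort_key, row_limit=10))
```

From a table like this, a typical next step is to take the dominant kernels into a dedicated profiler such as Nsight Compute, or to try fusing or replacing them with a custom CUDA or Triton kernel.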
Minimum Qualifications

• Bachelor's degree in Computer Science, Computer Engineering, Electrical Engineering, or equivalent practical experience
• 5+ years of experience working on performance optimization or high-performance computing systems
• Proficiency in CUDA or ROCm and experience with GPU profiling tools (e.g., Nsight, nvprof, CUPTI)
• Familiarity with PyTorch and performance-critical model execution
• Experience with distributed system debugging and optimization in multi-GPU environments
• Deep understanding of GPU architecture, parallel programming models, and compute kernels

Preferred Qualifications

• Master's or PhD in Computer Science, Electrical Engineering, or a related field
• Experience optimizing large models for training and inference (LLMs, VLMs, or video models)
• Knowledge of compiler stacks or ML compilers (e.g., torch.compile, Triton, XLA)
• Contributions to open-source ML or HPC infrastructure
• Familiarity with cloud-scale AI infrastructure and orchestration tools (e.g., Kubernetes, Ray)
• Background in ML systems engineering or hardware-aware model design

Example Projects

• Implement fully asynchronous low-latency sampling for large language models, integrated with structured outputs
• Implement GPU kernels for a new low-precision scheme and run experiments to find the optimal speed-quality tradeoff
• Build a distributed router with a custom load-balancing algorithm to optimize LLM cache efficiency (a hypothetical sketch appears at the end of this posting)
• Define metrics and build a harness for finding the optimal performance configuration (e.g., sharding, precision) for a given class of model
• Determine and implement in PyTorch an optimal sharding scheme for a novel attention variant
• Optimize communication patterns in RDMA networks (InfiniBand, RoCE)
• Debug numerical instabilities that affect a small portion of requests when a given model is deployed at scale

Total compensation for this role also includes meaningful equity in a fast-growing startup, along with a competitive salary and comprehensive benefits package. Base salary is determined by a range of factors including individual qualifications, experience, skills, interview performance, market data, and work location. The listed salary range is intended as a guideline and may be adjusted.

Base Pay Range (Plus Equity): $175,000 USD – $220,000 USD

Why Fireworks AI?

• Solve Hard Problems: Tackle challenges at the forefront of AI infrastructure, from low-latency inference to scalable model serving.
• Build What's Next: Work with bleeding-edge technology that impacts how businesses and developers harness AI globally.
• Ownership & Impact: Join a fast-growing, passionate team where your work directly shapes the future of AI. No bureaucracy, just results.
• Learn from the Best: Collaborate with world-class engineers and AI researchers who thrive on curiosity and innovation.

Fireworks AI is an equal-opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all innovators.
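To make one of the example projects above more concrete, here is a small, hypothetical sketch (not from the posting) of a cache-affinity load balancer: requests that share a prompt prefix are routed to the same replica so its KV cache can be reused, with a spill-over rule when that replica is saturated. The replica names, prefix length, and load cap are invented for illustration.

```python
import hashlib
from collections import defaultdict

# Hypothetical sketch: route requests that share a prompt prefix to the same
# replica so its KV cache can be reused, while capping per-replica load.
REPLICAS = ["replica-0", "replica-1", "replica-2", "replica-3"]
PREFIX_TOKENS = 64        # how much of the prompt defines "cache affinity"
MAX_OUTSTANDING = 32      # per-replica load cap before spilling over

outstanding = defaultdict(int)   # in-flight requests per replica


def route(prompt_tokens: list[int]) -> str:
    """Pick a replica: prefer the prefix-affine one, spill to the least loaded."""
    prefix = repr(prompt_tokens[:PREFIX_TOKENS]).encode()
    home = REPLICAS[int(hashlib.sha256(prefix).hexdigest(), 16) % len(REPLICAS)]
    if outstanding[home] < MAX_OUTSTANDING:
        choice = home
    else:
        # Affine replica is saturated: fall back to the least-loaded replica,
        # trading cache hit rate for tail latency.
        choice = min(REPLICAS, key=lambda r: outstanding[r])
    outstanding[choice] += 1
    return choice


def complete(replica: str) -> None:
    """Call when a request finishes, to release its load slot."""
    outstanding[replica] -= 1
```

In a production router the affinity key would more likely be a rolling hash over actual token IDs and load would come from the serving engine's queue depth; this sketch only illustrates the basic tradeoff between cache affinity and load balance.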


