Senior Principal Machine Learning Engineer, vLLM Inference
2 weeks ago
At Red Hat we believe the future of AI is open, and we are on a mission to bring the power of open‑source LLMs and vLLM to every enterprise. The Red Hat Inference team accelerates AI for the enterprise and brings operational simplicity to GenAI deployments. As leading developers and maintainers of the vLLM project, and inventors of state‑of‑the‑art techniques for model quantization and sparsification, our team provides a stable platform for enterprises to build, optimize, and scale LLM deployments. You would be joining the core team behind 2025's most popular open source project on GitHub, as measured by number of contributors.
As a Machine Learning Engineer focused on vLLM, you will be at the forefront of innovation, collaborating with our team to tackle the most pressing challenges in model performance and efficiency. Your work with machine learning and high performance computing will directly impact the development of our cutting‑edge software platform, helping to shape the future of AI deployment and utilization. If you are someone who wants to contribute to solving challenging technical problems at the forefront of deep learning in the open source way, this is the role for you.
Join us in shaping the future of AI
Write robust Python and C++ across vLLM systems, including high‑performance machine‑learning primitives, performance analysis and modeling, and numerical methods
Contribute to the design, development, and testing of various inference optimization algorithms
Act as a core contributor for the vLLM open‑source project: reviewing PRs, authoring RFCs, and mentoring external contributors
Extensive experience in writing high performance code for GPUs and deep knowledge of GPU hardware
Strong understanding of computer architecture, parallel processing, and distributed computing concepts
Deep understanding of and experience with GPU performance optimization, such as the ability to reason about memory‑bandwidth‑bound vs. compute‑bound operations
Experience optimizing kernels for deep neural networks
BS or MS in computer science, computer engineering, or a related field; a PhD in an ML‑related domain is considered a plus
#AI-HIRING
Red Hat determines compensation based on several factors including but not limited to job location, experience, applicable skills and training, external market value, and internal pay equity. This position may also be eligible for bonus, commission, and/or equity. For positions with Remote‑US locations, the actual salary range for the position may differ based on location but will be commensurate with job duties and relevant work experience.
Red Hat is the world’s leading provider of enterprise open source software solutions, using a community‑powered approach to deliver high‑performing Linux, cloud, container, and Kubernetes technologies. Spread across 40+ countries, our associates work flexibly across work environments, from in‑office, to office‑flex, to fully remote, depending on the requirements of their role. We hire creative, passionate people ready to contribute their ideas, help solve complex problems, and make an impact.
● Comprehensive medical, dental, and vision coverage
● Flexible Spending Account – healthcare and dependent care
● Health Savings Account – high deductible medical plan
● Retirement 401(k) with employer match
● Paid time off and holidays
● Leave benefits including disability, paid family medical leave, and paid military leave
● Additional benefits including employee stock purchase plan, family planning reimbursement, tuition reimbursement, transportation expense account, employee assistance program, and more
Note: These benefits are only applicable to full time, permanent associates at Red Hat located in the United States.
Inclusion at Red Hat
Red Hat’s culture is built on the open source principles of transparency, collaboration, and inclusion, where the best ideas can come from anywhere and anyone.
Equal Opportunity Policy (EEO)
We review applications for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, ancestry, citizenship, age, veteran status, genetic information, physical or mental disability, medical condition, marital status, or any other basis prohibited by law.
We are not responsible for, and will not pay, any fees, commissions, or any other payment related to unsolicited resumes or CVs except as required in a written contract between Red Hat and the recruitment agency or party requesting payment of a fee.
Red Hat supports individuals with disabilities and provides reasonable accommodations to job applicants. If you need assistance completing our online job application, email us. General inquiries, such as those regarding the status of a job application, will not receive a reply.