Machine Learning Engineer, vLLM Inference - Tool Calling and Structured Output
3 weeks ago
At Red Hat, we believe the future of AI is open, and we are on a mission to bring the power of open-source LLMs and vLLM to every enterprise. The Red Hat Inference team accelerates AI for the enterprise and brings operational simplicity to GenAI deployments. As leading contributors and maintainers of the vLLM and LLM-D projects and inventors of state-of-the-art techniques for model quantization and sparsification, our team provides a stable platform for enterprises to build, optimize, and scale LLM deployments.

As a Machine Learning Engineer focused on vLLM, you will be at the forefront of innovation, collaborating with our team to tackle the most pressing challenges in model performance and efficiency. In this role, you will build and maintain the subsystems that allow vLLM to speak the language of tools. You will bridge the gap between probabilistic token generation and deterministic schema compliance, working directly on tool parsers that interpret raw model outputs and structured output engines that guide generation at the logit level (two brief illustrative sketches of these ideas follow the posting details below). If you are someone who wants to contribute to solving challenging technical problems at the forefront of deep learning in the open source way, this is the role for you.

What You Will Do
Write robust Python and Pydantic code, working on vLLM systems, high-performance machine learning primitives, performance analysis and modeling, and numerical methods
Contribute to the design, development, and testing of the function calling, tool calling parser, and structured output subsystems in vLLM
Participate in technical design discussions and provide innovative solutions to complex problems
Give thoughtful and prompt code reviews
Mentor and guide other engineers and foster a culture of continuous learning and innovation

What You Will Bring
Strong experience in Python and Pydantic
Strong understanding of LLM inference core concepts, such as logits processing (i.e., the Logit Generation → Sampling → Decoding loop)
Deep familiarity with the OpenAI Chat Completions API specification
Deep familiarity with libraries like Outlines, XGrammar, Guidance, or Llama.cpp grammars
Proficiency with efficient parsing techniques (e.g., incremental parsing) is a strong plus
Proficiency with Jinja2 chat templates
Familiarity with Beam Search and Greedy Decoding in the context of constraints
Familiarity with LLM inference metrics and tradeoffs
Experience with tensor math libraries such as PyTorch is a strong plus
Strong communication skills with both technical and non-technical team members
BS or MS in computer science, computer engineering, mathematics, or a related field; a PhD in an ML-related domain is considered a plus

The salary range for this position is $133,650.00 – $220,680.00. The actual offer will be based on your qualifications.

Pay Transparency
Red Hat determines compensation based on several factors including but not limited to job location, experience, applicable skills and training, external market value, and internal pay equity. Annual salary is one component of Red Hat's compensation package. This position may also be eligible for bonus, commission, and/or equity.
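To make the "tool parser" half of the role concrete, here is a minimal, hypothetical sketch of interpreting a raw model completion as a validated tool call with Pydantic. It is not vLLM's actual parser code; the GetWeatherArgs schema and parse_tool_call helper are invented for illustration only.

```python
# Illustrative sketch only -- not vLLM's tool-parser implementation. It shows the
# general shape of the problem: take the raw text a model emitted, extract the
# JSON payload, and validate it against a Pydantic schema so downstream code can
# rely on typed, schema-compliant tool arguments.
import json

from pydantic import BaseModel, ValidationError


class GetWeatherArgs(BaseModel):
    """Hypothetical tool schema used only for this example."""
    city: str
    unit: str = "celsius"


def parse_tool_call(raw: str) -> GetWeatherArgs | None:
    """Pull the first {...} block out of the raw completion and validate it."""
    start, end = raw.find("{"), raw.rfind("}")
    if start == -1 or end == -1:
        return None  # the model produced no JSON at all
    try:
        payload = json.loads(raw[start:end + 1])
        return GetWeatherArgs.model_validate(payload)
    except (json.JSONDecodeError, ValidationError):
        return None  # malformed or schema-violating output


raw_completion = 'Sure, calling the tool now: {"city": "Boston", "unit": "fahrenheit"}'
print(parse_tool_call(raw_completion))  # city='Boston' unit='fahrenheit'
```

A production parser additionally has to handle streaming and incremental parsing (emitting partial tool calls as tokens arrive), which is why the posting calls out incremental parsing techniques as a plus.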
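To illustrate "guiding generation at the logit level", the toy loop below mimics the Logit Generation → Sampling → Decoding cycle with a hand-rolled token mask, in the spirit of structured-output engines such as Outlines or XGrammar. It is a self-contained sketch with a fake character-level vocabulary, random logits, and an invented skeleton "grammar"; none of it is vLLM's or any library's real API.

```python
# Illustrative sketch only: mask disallowed tokens at each step so a
# probabilistic sampler can only emit text consistent with a fixed JSON skeleton.
import math
import random

VOCAB = list('{}":, abcdefghijklmnopqrstuvwxyz') + ["<eos>"]
SKELETON_PREFIX = '{"city": "'  # forced opening of the target object


def allowed_tokens(text: str) -> set[str]:
    """Return the tokens our toy 'grammar' permits given what was emitted so far."""
    if len(text) < len(SKELETON_PREFIX):
        return {SKELETON_PREFIX[len(text)]}  # still forcing the skeleton prefix
    body = text[len(SKELETON_PREFIX):]
    if body.endswith('"}'):
        return {"<eos>"}                     # object is closed: stop
    if body.endswith('"'):
        return {"}"}                         # close the object
    opts = set("abcdefghijklmnopqrstuvwxyz ")  # free-form field value
    if body and not body.endswith(" "):
        opts.add('"')                        # allow closing the string
    return opts


def fake_logits() -> list[float]:
    """Stand-in for the model forward pass: one random score per vocab token."""
    return [random.uniform(-1.0, 1.0) for _ in VOCAB]


def sample(logits: list[float], mask: set[str]) -> str:
    """Sample proportionally to exp(logit), but only over allowed tokens."""
    weights = [math.exp(l) if tok in mask else 0.0 for tok, l in zip(VOCAB, logits)]
    return random.choices(VOCAB, weights=weights, k=1)[0]


text = ""
for _ in range(60):  # cap the demo so it always terminates
    tok = sample(fake_logits(), allowed_tokens(text))
    if tok == "<eos>":
        break
    text += tok

print(text)  # e.g. {"city": "qov bn"} -- the mask keeps every step schema-consistent
```

Real engines compile a JSON schema or grammar into an efficient token-level automaton rather than checking prefixes by hand, but the core loop of generate logits, mask, sample, decode is the same idea.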
For positions with Remote-US locations, the actual salary range for the position may differ based on location but will be commensurate with job duties and relevant work experience.

Benefits
Comprehensive medical, dental, and vision coverage
Flexible Spending Account – healthcare and dependent care
Health Savings Account – high deductible medical plan
Retirement 401(k) with employer match
Paid time off and holidays
Paid parental leave plans for all new parents
Leave benefits including disability, paid family medical leave, and paid military leave
Employee stock purchase plan, family planning reimbursement, tuition reimbursement, transportation expense account, employee assistance program, and more
Note: These benefits are only applicable to full-time, permanent associates at Red Hat located in the United States.

Equal Opportunity Policy (EEO)
Red Hat is proud to be an equal opportunity workplace and an affirmative action employer. We review applications for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, ancestry, citizenship, age, veteran status, genetic information, physical or mental disability, medical condition, marital status, or any other basis prohibited by law. Red Hat does not seek or accept unsolicited resumes or CVs from recruitment agencies. We are not responsible for, and will not pay, any fees, commissions, or any other payment related to unsolicited resumes or CVs except as required in a written contract between Red Hat and the recruitment agency or party requesting payment of a fee. Red Hat supports individuals with disabilities and provides reasonable accommodations to job applicants. If you need assistance completing our online job application, email application‑ General inquiries, such as those regarding the status of a job application, will not receive a reply.