AI Frameworks Engineer - Graduate Intern

Do you have a strong passion for optimizing cutting-edge HPC, datacenter, and client SW for maximum performance on the latest HW? We are looking for individuals who are interested in optimizing the world's leading Machine Learning / Deep Learning frameworks for current and future Intel datacenter/client CPUs and GPUs. This is a product development position with the end goal being high-quality, high-performance, secure product SW that makes the latest cutting-edge HW shine.

 

You will start optimization pre-silicon and have access to HW shortly after it is first powered on. Product innovation and publication are encouraged, and there are opportunities to collaborate with research partners to develop ideas and translate them into the product.

 

The Machine Learning Performance (MLP) division is at the leading edge of the AI revolution at Intel, covering the full stack from applied ML to ML / DL and data analytics frameworks, to Intel oneAPI AI libraries, and CPU/GPU HW/SW co-design for AI acceleration. It is an organization with a strong technical atmosphere, innovation, friendly team-work spirit, and engineers with diverse backgrounds.

 

The Deep Learning Frameworks and Libraries (DLFL) department is responsible for optimizing leading DL frameworks on Intel platforms. Our goal is to lead in Deep Learning performance on both the CPU and GPU. We work closely with other Intel business units and industry partners. You will work on software development and optimizations in the following areas:

 

  • Analyze Deep Learning models and framework implementations to identify performance bottlenecks and optimization opportunities.

  • Accelerate the frameworks, such as PyTorch, on Intel platforms by contributing optimizations and features directly to the public framework source or to pluggable open source extension modules. These frameworks are primarily written in C++ and Python.

  • Develop low-precision high-performance versions of popular models to take advantage of new instructions and architectures designed to accelerate Deep Learning.

  • An ideal candidate will also exhibit the ability to work in a dynamic, team-oriented environment

  • Ability to work closely with teammates at multiple US sites, as well as virtually with closely related teams in other countries, on the same product

  • Positive can-do attitude and a desire to deliver results and winning products

  • Excellent written and oral communication skills

  • You should have a passion for optimization and performance at the low level, close to the HW, as well as for good SW engineering practice and usability.

Qualifications

The requirements listed would typically be obtained through a combination of industry-relevant job experience, internship experience, and schoolwork, classes, or research.

 

You must possess the below minimum qualifications to be initially considered for this position. Preferred qualifications are in addition to the minimum requirements and are considered a plus factor in identifying top candidates.

Minimum Qualifications:

  • Active student pursuing a master's degree or PhD in Computer Science, Data Analytics, or a related technical field

  • 1+ years of experience with C/C++/Python

Preferred Qualifications:

  • Research, publications, or coursework related to Deep Learning

  • Previous internship experience in the field of AI

  • Experience with TensorFlow / PyTorch

 

Inside this Business Group

The Machine Learning Performance (MLP) division is at the leading edge of the AI revolution at Intel, covering the full stack from applied ML to ML / DL and data analytics frameworks, to Intel oneAPI AI libraries, and CPU/GPU HW/SW co-design for AI acceleration. It is an organization with a strong technical atmosphere, innovation, friendly team-work spirit, and engineers with diverse backgrounds. The Deep Learning Frameworks and Libraries (DLFL) department is responsible for optimizing leading DL frameworks on Intel platforms. We also develop the popular oneAPI Deep Neural Network Library (oneDNN), and new oneDNN Graph library. Our goal is to lead in Deep Learning performance for both the CPU and GPU. We work closely with other Intel business units and industrial partners.

Other Locations

US, OR, Hillsboro; US, WA, Seattle; US, AZ, Phoenix; US, GA, Atlanta

Posting Statement

All qualified applicants will receive consideration for employment without regard to race, color, religion, religious creed, sex, national origin, ancestry, age, physical or mental disability, medical condition, genetic information, military and veteran status, marital status, pregnancy, gender, gender expression, gender identity, sexual orientation, or any other characteristic protected by local law, regulation, or ordinance.

Benefits

We offer a total compensation package that ranks among the best in the industry. It consists of competitive pay, stock, and bonuses, as well as benefit programs which include health, retirement, and vacation. Find more information about all of our Amazing Benefits here.

Annual Salary Range for jobs which could be performed in US, Washington, California: $63,000.00-$166,000.00

  • Salary range is dependent on a number of factors, including location and experience

Working Model

This role will be eligible for our hybrid work model which allows employees to split their time between working on-site at their assigned Intel site and off-site. In certain circumstances the work model may change to accommodate business needs.

Job Type

Hybrid
Sponsor Organization
Intel