Intel

AI Frameworks Engineer

Job description

Do you have a passion for optimizing cutting-edge datacenter and consumer SW for maximum performance on the latest HW?  We are looking for individuals who are interested in optimizing the world’s leading Machine Learning / Deep Learning frameworks for current and future Intel datacenter/consumer CPUs and GPUs.

The Intel Data Center and AI (DCAI) group creates Intel’s leading data center CPU, GPU, and AI accelerator products. The AI SW Engineering (AISE) division is at the leading edge of the AI revolution at Intel, covering the AI stack from frameworks such as PyTorch and TensorFlow to higher-level and domain SW such as Hugging Face, DeepSpeed, and application reference kits. It is an organization with a strong technical atmosphere, a spirit of innovation, a friendly teamwork culture, and engineers with diverse technical backgrounds.


Responsibilities include but are not limited to:

  • Conduct design and development work to build and optimize AI software.
  • Design, develop, and optimize AI frameworks, including contributing to external frameworks (e.g., TensorFlow, PyTorch).
  • Implement distributed algorithms, such as model/data parallelism and asynchronous data communication, in machine learning and/or deep learning frameworks.
  • Transform computational graph representations of neural network models, and profile distributed deep learning models to identify performance bottlenecks (see the illustrative sketch after this list).
  • Propose solutions across individual component teams, including performance libraries and compilers.
  • Optimize for CPU and/or GPU backends and HW features.
  • Interact with AI researchers, customers, and industry partners.
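
For a sense of what the profiling responsibility above can involve, here is a minimal, purely illustrative PyTorch sketch; the model, tensor shapes, and settings are hypothetical placeholders and not part of the role description:

    # Illustrative only: profile a toy model on CPU to surface operator-level
    # hotspots; the model and shapes below are hypothetical placeholders.
    import torch
    import torch.nn as nn
    from torch.profiler import profile, record_function, ProfilerActivity

    model = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU(), nn.Linear(4096, 1024))
    x = torch.randn(64, 1024)

    with profile(activities=[ProfilerActivity.CPU], record_shapes=True) as prof:
        with record_function("forward_pass"):
            model(x)

    # Rank operators by total CPU time to see where optimization effort should go.
    print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=10))

Sorting operators by total CPU time is one simple way to decide which kernels (for example, those dispatched to performance libraries such as oneDNN) merit optimization effort.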

Qualifications

You must possess the below minimum qualifications to be initially considered for this position. Preferred qualifications are in addition to the minimum requirements and are considered a plus factor in identifying top candidates. Experience listed below would be obtained through a combination of your schoolwork/classes/research and/or relevant previous job and/or internship experiences. This is an entry-level position and will be compensated accordingly.


Minimum Qualifications:

The candidate must have a Bachelor's degree in Electrical/Computer Engineering, Computer Science, or a related technical field and 3+ years of experience -OR- a Master's degree in Electrical/Computer Engineering, Computer Science, or a related technical field and 1+ years of experience -OR- a PhD in Electrical/Computer Engineering, Computer Science, or a related technical field and 1+ years of experience in:

  • 1+ years with C++/Python, either through coursework or prior experience.


Preferred Qualifications:

  • Research, publications, or coursework related to Deep Learning
  • Previous internship experience in the field of AI
  • Experience with Deep Learning frameworks such as PyTorch
  • Understanding of Deep Learning algorithms
  • Experience developing or optimizing Deep Learning models, especially low-precision models
  • Familiarity with MLPerf benchmarks

Inside this Business Group

The Machine Learning Performance (MLP) division is at the leading edge of the AI revolution at Intel, covering the full stack from applied ML, to ML/DL and data analytics frameworks, to Intel oneAPI AI libraries, and CPU/GPU HW/SW co-design for AI acceleration. It is an organization with a strong technical atmosphere, a spirit of innovation, a friendly teamwork culture, and engineers with diverse backgrounds. The Deep Learning Frameworks and Libraries (DLFL) department is responsible for optimizing leading DL frameworks on Intel platforms. We also develop the popular oneAPI Deep Neural Network Library (oneDNN) and the new oneDNN Graph library. Our goal is to lead in Deep Learning performance on both CPU and GPU. We work closely with other Intel business units and industry partners.

Other Locations

US, OR, Hillsboro; US, WA, Seattle; US, AZ, Phoenix
