Lexsi Labs
UK & Ireland

AI Research Internship

United Kingdom
2026-03-03

Role Description

**AI Research Intern – Lexsi Labs**

**Commitment:** Full-time internship (6 months; potential extension or full-time offer)
**Start Date:** Rolling

**About Lexsi Labs**

Lexsi Labs is one of the leading frontier labs focused on building aligned, interpretable, and safe superintelligence. Most of our work involves creating new methodologies for efficient alignment, interpretability-led strategies, and tabular foundational model research. Our mission is to create AI tools that empower researchers, engineers, and organizations to unlock AI's full potential while maintaining transparency and safety.

Our team thrives on a shared passion for cutting-edge innovation, collaboration, and a relentless drive for excellence. At Lexsi.ai, everyone contributes hands-on to our mission in a flat organizational structure that values curiosity, initiative, and exceptional performance. As a research intern at Lexsi.ai, you will be uniquely positioned to work on very large-scale industry problems and push forward the frontiers of AI technology. You will join a unique atmosphere where startup culture meets research innovation, with speed and reliability as the key outcomes.

**What You'll Do**

We work on multiple frontier research ideas and challenges. If selected, you will collaborate closely with our research and engineering teams on one of the following areas:

* **Library Development:** Architect and enhance open-source Python tooling for alignment, explainability, uncertainty quantification, robustness, and machine unlearning.
* **Explainability & Trust:** Apply our own and other SOTA XAI techniques (DLB, LRP, SHAP, Grad-CAM, Backtrace) across text, image, and tabular modalities to uncover and present new model interpretability findings.
* **Mechanistic Interpretability:** Probe internal model representations and circuits, using activation patching, feature visualization, and related methods, to diagnose failure modes and emergent behaviors.
* **Uncertainty & Risk:** Develop, implement, and benchmark uncertainty estimation methods (Bayesian approaches, ensembles, test-time augmentation) alongside robustness metrics for foundation models.
* **Tabular Foundational Models (Orion):** Work with our leading Tabular Foundational Model team to improve and launch new tabular foundational model architectures, and contribute to our leading open-source library TabTune.
* **Reinforcement Learning:** Explore new ideas and algorithms around RL and our new RL fine-tuning library.
* **Research Contributions:** Author and maintain experiment code, run systematic studies, and co-author whitepapers or conference submissions.

**General Required Qualifications**

* **Strong Python expertise:** writing clean, modular, and testable code.
* **Theoretical foundations:** deep understanding of machine learning and deep learning principles, with hands-on PyTorch experience.
* **Transformer architectures & fundamentals:** comprehensive knowledge of attention mechanisms, positional encodings, tokenization, and training objectives in BERT, GPT, LLaMA, T5, MoE, Mamba, etc.
* **Version control & CI/CD:** Git workflows, packaging, documentation, and collaborative development practices.
* **Collaborative mindset:** excellent communication, peer code reviews, and agile teamwork.

**Preferred Domain Expertise** (any one of these is good)

* **Explainability:** applied experience with XAI methods such as DLB, SHAP, LIME, IG, LRP, DL-Backtrace, or Grad-CAM.
* **Mechanistic interpretability:** familiarity with circuit analysis, activation patching, and feature visualization for neural network introspection.
* **Uncertainty estimation:** hands-on experience with Bayesian techniques, ensembles, or test-time augmentation.
* **Quantization & pruning:** applying model compression to optimize size, latency, and memory footprint.
* **LLM alignment techniques:** crafting and evaluating few-shot, zero-shot, and chain-of-thought prompts; experience with RLHF workflows, reward modeling, and human-in-the-loop fine-tuning.
* **Tabular Foundational Models:** experience using or improving TFMs such as Orion, TabPFN, or TabICL.
* **Post-training adaptation & fine-tuning:** practical work with full-model fine-tuning and parameter-efficient methods (LoRA, adapters), instruction tuning, knowledge distillation, and domain specialization.

**Additional Experience (Nice-to-Have)**

* **Publications:** contributions to CVPR, ICLR, ICML, KDD, WWW, WACV, NeurIPS, ACL, NAACL, EMNLP, IJCAI, or equivalent research experience.
* **Open-source contributions:** prior work on AI/ML libraries or tooling.
* **Domain exposure:** risk-sensitive applications in finance, healthcare, or similar fields.
* **Performance optimization:** familiarity with large-scale training infrastructures.

**What We Offer**

* **Real-world impact:** address high-stakes AI challenges in regulated industries.
* **Compute resources:** access to GPUs, cloud credits, and proprietary models.
* **Competitive stipend:** with potential for full-time conversion.
* **Authorship opportunities:** co-authorship on papers, technical reports, and conference submissions.
