- 3+ years in applied machine learning and data science
- Experience training and fine-tuning large-scale models (LLMs, transformers, diffusion models, etc.)
- Experience with parameter-efficient fine-tuning (PEFT) approaches such as LoRA, prefix tuning, adapters, and quantization-aware training
- Experience with PyTorch, TensorFlow, the Hugging Face ecosystem, and distributed training frameworks (e.g., DeepSpeed, PyTorch Lightning, Ray)
- Experience working with large datasets, feature engineering, and data pipelines, leveraging tools such as Spark, Databricks, or cloud-native ML services (AWS SageMaker, GCP Vertex AI, or Azure ML)
- Experience with GPU/TPU optimization, mixed-precision training, and scaling ML workloads in cloud or HPC environments
- Experience adapting foundation models to domain-specific applications through fine-tuning, transfer learning, or PEFT
- Experience designing, evaluating, and improving models using robust validation strategies, bias/fairness checks, and performance optimization techniques
- Experience with applied AI problems across NLP, computer vision, multimodal systems, or other domains