
Open Position

AI Solutions Engineer

About Nymbl

At Nymbl, we redefine application development—combining expertise and innovation to build next-generation solutions using AI, full-stack, and low-code/no-code technologies. Our "learn, build, grow" model ensures long-term success for our clients while creating space for our people to thrive.


Role Summary

The AI Solutions Engineer at Nymbl works directly with clients to deliver enterprise-grade AI solutions. Acting as a forward-deployed engineer, this role blends full-stack development expertise, applied AI/ML engineering, and strong client-facing skills. AI Solutions Engineers implement Retrieval-Augmented Generation (RAG) systems, design and deploy LLM-powered applications, and integrate AI into enterprise workflows to create measurable client outcomes.

This role blends responsibilities from:

  1. Forward-Deployed Engineer – client delivery, technical advisory, building in production environments.
  2. Machine Learning Engineer – fine-tuning and deploying LLMs and RAG systems with applied AI expertise.
  3. Full-Stack Developer – enterprise-grade coding across front-end, back-end, and data layers.

Expectations

  • Leadership: Take ownership of technical implementation, guiding both clients and internal teams toward scalable, production-ready AI solutions.
  • Communication: Translate complex AI concepts into clear business and technical language for executives, stakeholders, and developers.
  • Autonomy: Lead end-to-end delivery of AI features and integrations, managing coding, testing, deployment, and client handoff.
  • Collaboration: Partner closely with Solution Architects, Client Partners, and Developers to ensure projects balance innovation, feasibility, and business value.
  • Client Engagement: Act as a trusted technical advisor in workshops, demos, and delivery reviews, building confidence that Nymbl can execute reliably.

Business-as-Usual Activities

  • Design and implement RAG pipelines with LLMs and enterprise data sources.
  • Build and deploy AI agents using frameworks such as LangChain, Semantic Kernel, or custom architectures.
  • Develop full-stack AI-enabled applications (front-end, back-end, APIs, and data integrations).
  • Optimize vector databases (e.g., Pinecone, FAISS, Milvus) for retrieval and semantic search.
  • Fine-tune or adapt LLMs for industry- or client-specific needs.
  • Deploy solutions with enterprise reliability standards (Docker, Kubernetes, CI/CD).
  • Run client demos, technical workshops, and enablement sessions to accelerate adoption.
  • Collaborate with internal teams on burn tracking, utilization, and project profitability.
  • Document architectures, pipelines, and operational guidelines for client and internal use.

Key Performance Indicators (KPIs)

  • Solution Adoption Rate: % of delivered AI solutions actively used by clients after 90 days.
  • Deployment Success Rate: ≥ 95% of AI solutions deployed on time and functioning as expected.
  • Billable Utilization: ≥ 100% of the weighted utilization target.
  • Client Satisfaction: measured through post-engagement surveys and renewal likelihood.
  • Reusability Index: number of frameworks, libraries, or components reused across engagements.

What Success Looks Like (6–12 Months)

  • Recognized by clients as a trusted technical advisor and partner.
  • AI solutions delivered are in production use and driving measurable outcomes.
  • Consistently anticipate and solve client technical blockers before they escalate.
  • Contribute to Nymbl’s AI playbook by codifying reusable frameworks, deployment best practices, or reference architectures.
  • Demonstrated ability to work across multiple platforms and stacks, deployable in diverse client environments.
  • Internal teams rely on your expertise to elevate technical standards and accelerate delivery velocity.

Common Challenges and Needed Skills

  • Enterprise data complexity: problem-solving, unstructured and structured data pipelines.
  • Rapidly evolving AI tools: continuous learning, adaptability, maturity assessment.
  • Client skepticism about AI: clear communication, proof points, framing business value.
  • Balancing innovation vs production-readiness: disciplined testing, pragmatic engineering mindset.
  • Integration into legacy systems: creativity, patience, full-stack development skills.

Technical Skills

  Prompt Engineering

  • Crafting and iterating on prompts for LLMs to achieve consistent, accurate, and enterprise-ready outputs.
  • Applying structured techniques to reduce variability and ensure responses align with client requirements.
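As a concrete illustration of the structured techniques described above, here is a minimal prompt-template sketch in Python. The classifier task, labels, and template wording are illustrative assumptions, not a Nymbl standard.

```python
# Structured prompt template: fixed instructions, a few-shot example,
# and an output-format constraint to reduce response variability.

TEMPLATE = """You are a support ticket classifier.
Respond with exactly one label: BILLING, TECHNICAL, or OTHER.

Example:
Ticket: "My invoice shows the wrong amount."
Label: BILLING

Ticket: "{ticket}"
Label:"""


def build_classifier_prompt(ticket: str) -> str:
    """Fill the template with a ticket; the LLM completes after 'Label:'."""
    return TEMPLATE.format(ticket=ticket)


print(build_classifier_prompt("The app crashes on login."))
```

Constraining both the input slot and the allowed outputs is what makes responses consistent enough to parse programmatically downstream.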

  Retrieval-Augmented Generation (RAG) Systems

  • Designing pipelines that integrate vector databases, embeddings, and prompt templates.
  • Connecting enterprise data sources into LLM-powered workflows for context-rich responses.
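The retrieve-then-generate shape described above can be sketched as follows. A toy keyword-overlap score stands in for a real embedding model, the in-memory list stands in for a vector database (e.g., FAISS or Pinecone), and the sample knowledge base is invented for illustration.

```python
import re


def tokens(text: str) -> set[str]:
    """Lowercased word tokens, used by the toy relevance score."""
    return set(re.findall(r"\w+", text.lower()))


def score(query: str, chunk: str) -> float:
    """Toy stand-in for semantic similarity: query-token overlap."""
    q = tokens(query)
    return len(q & tokens(chunk)) / len(q) if q else 0.0


def retrieve(query: str, chunks: list[str], k: int = 1) -> list[str]:
    """Return the top-k chunks; a vector database does this at scale."""
    return sorted(chunks, key=lambda c: score(query, c), reverse=True)[:k]


def build_prompt(query: str, context: list[str]) -> str:
    """Assemble retrieved context and the question into one LLM prompt."""
    ctx = "\n".join(f"- {c}" for c in context)
    return f"Answer using only the context below.\nContext:\n{ctx}\nQuestion: {query}"


kb = [
    "Invoices are processed within 30 days of receipt.",
    "The VPN portal requires multi-factor authentication.",
]

print(build_prompt("When are invoices processed?",
                   retrieve("When are invoices processed?", kb)))
```

Swapping the toy `score` for embedding similarity and `kb` for an indexed document store yields the production pipeline shape without changing the control flow.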

  Model Fine-Tuning

  • Applying supervised fine-tuning, reinforcement learning with human feedback (RLHF), or domain adaptation.
  • Providing client-specific datasets to improve accuracy, compliance, and relevance.

  AI Agents

  • Building autonomous agents that use reasoning + tools to act within client environments.
  • Combining multiple LLM roles (e.g., planner, executor, validator) into reliable workflows.
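The planner/executor/validator split above can be sketched as a control loop. Each role would be an LLM call in practice; stub functions stand in here so the flow is visible, and the goal, search tool, and single-retry policy are illustrative assumptions.

```python
def planner(goal: str) -> list[str]:
    """Break a goal into ordered steps (an LLM call in practice)."""
    return [f"step 1: gather data for {goal!r}",
            f"step 2: summarize {goal!r}"]


def executor(step: str, tools: dict) -> str:
    """Carry out one step, calling a tool when the step requires it."""
    if "gather" in step:
        return tools["search"](step)
    return f"done: {step}"


def validator(result: str) -> bool:
    """Check a result before accepting it (an LLM or rule check)."""
    return result.startswith("done") or "found" in result


def run_agent(goal: str, tools: dict) -> list[str]:
    """Plan, execute each step, validate, and retry once on failure."""
    results = []
    for step in planner(goal):
        result = executor(step, tools)
        if not validator(result):
            result = executor(step, tools)  # single retry policy
        results.append(result)
    return results


tools = {"search": lambda q: f"found 3 records for: {q}"}
print(run_agent("quarterly revenue report", tools))
```

Keeping the roles as separate functions is what makes the workflow reliable: each stage can be tested, swapped, or retried independently.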

  LLM Deployment

  • Packaging and deploying LLM solutions into client production environments.
  • Leveraging containerization, APIs, and deployment pipelines for scalability and security.

  LLM Optimization

  • Applying quantization, distillation, caching, and latency reduction techniques.
  • Balancing model performance, cost efficiency, and client SLAs.
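Of the techniques listed, caching is the simplest to sketch: an in-process `functools.lru_cache` keyed on the exact prompt skips repeat model calls. The stub `call_model` stands in for a real (slow, billed) LLM API.

```python
import functools

CALLS = {"count": 0}  # tracks how often the underlying model is hit


def call_model(prompt: str) -> str:
    """Stand-in for a slow, billed LLM API call."""
    CALLS["count"] += 1
    return f"response to: {prompt}"


@functools.lru_cache(maxsize=1024)
def cached_completion(prompt: str) -> str:
    """Exact-match response cache; repeated prompts skip the model."""
    return call_model(prompt)


cached_completion("summarize Q3 pipeline")
cached_completion("summarize Q3 pipeline")  # served from cache
print(CALLS["count"])  # model was invoked once
```

Exact-match caching only pays off for repeated prompts; semantic caching (keying on embedding similarity) extends the idea at the cost of occasional false hits.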

  LLM Observability

  • Implementing monitoring for model accuracy, bias, latency, and cost.
  • Using tracing, dashboards, and evaluation frameworks to ensure reliability at scale.
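A minimal version of such tracing can be sketched as a wrapper that records per-call metrics for a dashboard to aggregate. The response stub and the per-character cost rate are placeholders, not real pricing.

```python
import time

TRACES: list[dict] = []  # in practice, shipped to a tracing backend


def observed_call(prompt: str) -> str:
    """Wrap the model call, recording latency and an estimated cost."""
    start = time.perf_counter()
    response = f"response to: {prompt}"  # stand-in for a real LLM call
    TRACES.append({
        "prompt_chars": len(prompt),
        "latency_s": time.perf_counter() - start,
        "est_cost_usd": len(prompt) * 1e-6,  # placeholder per-char rate
    })
    return response


observed_call("summarize the meeting notes")
print(TRACES[0]["prompt_chars"])  # 27
```

Accuracy and bias metrics need evaluation sets rather than per-call timers, but they hang off the same trace records.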

  Context Engineering

  • Designing workflows that bring the right data (documents, memory, tools, databases) into prompts.
  • Ensuring compliance, data governance, and high fidelity of client knowledge bases.