Loka Helps Simplify Real-Time Data Gathering and Analysis with GenAI

Industry

Life Science, News, Machine Learning, Information Technology

Tech & Tools

Amazon SageMaker, AWS S3, HuggingFace, Llama2, LoRA, TRL and PPO

Teams & Services

GenAI, Data & ML Engineering, Solutions Architecture, Project Management

Milestones

0 to PoC in six weeks

AppliedXL addresses a critical gap in biotech: the need for effective tools to access and interpret real-time
clinical trial data.

The situation

AppliedXL’s mission is to help global leaders across biotech with data-driven decision-making, leveraging machine learning and Generative AI for quicker, more nuanced clinical intelligence. AppliedXL has a world-class in-house ML and AI team and partnered with Loka to kickstart their GenAI work on Amazon SageMaker.

The goal

The collaboration aims to accelerate AppliedXL’s work within an AWS environment, tapping into Loka’s knowledge of best practices for fine-tuning LLMs that leverage GPU optimization in Amazon SageMaker. The results arose from Loka’s GenAI Workshop, an AWS-funded program that streamlines the design and deployment of LLM systems.

The challenge

Diverging from more common RAG-centric GenAI use cases, AppliedXL’s project demanded precise technical execution and deep knowledge of Amazon SageMaker. The key challenges were twofold:

Platform Design: AppliedXL needed a platform that allowed domain experts to contribute their journalistic preferences.

Framework Development: AppliedXL also required a framework that would enable the LLM to incorporate human preferences when producing outputs.

The solution

During the discovery portion of our GenAI Workshop, Loka determined that the primary goal was framework development. To meet this objective, Loka established an end-to-end reinforcement learning from human feedback (RLHF) workflow tailored for AppliedXL’s unique data.

Loka devised a holistic labeling platform that allows domain experts to manually generate ideal answers and information, rank multiple LLM generations and iterate on the evaluation rubric. We also selected RLHF because it excels at aligning an LLM with subjective criteria captured through human preferences.
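To make the mechanism concrete, here is a minimal sketch of how ranked generations from such a labeling platform can feed a pairwise reward-model objective, using Hugging Face Transformers. The base model, prompts and preference texts are illustrative assumptions, not AppliedXL’s data or configuration.

```python
# Minimal sketch: turning an expert-labeled preference pair into a pairwise
# (Bradley-Terry style) reward-model loss. Model and texts are illustrative.
import torch
import torch.nn.functional as F
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "meta-llama/Llama-2-7b-hf"  # assumed base; any backbone with a scalar head works
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token

reward_model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=1)
reward_model.config.pad_token_id = tokenizer.pad_token_id

def pairwise_loss(prompt: str, chosen: str, rejected: str) -> torch.Tensor:
    """The expert-preferred answer should receive the higher reward score."""
    batch = tokenizer(
        [prompt + chosen, prompt + rejected],
        return_tensors="pt",
        padding=True,
        truncation=True,
    )
    scores = reward_model(**batch).logits.squeeze(-1)  # shape (2,): [chosen, rejected]
    return -F.logsigmoid(scores[0] - scores[1])

# One labeled preference pair of the kind the annotation platform produces (illustrative).
loss = pairwise_loss(
    prompt="Summarize this clinical trial update: ...",
    chosen="A concise, sourced summary in the house style ...",
    rejected="A vague summary that speculates beyond the filing ...",
)
loss.backward()
```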

What we delivered

Custom Tool Development: Loka evaluated several open-source labeling and annotation tools and provided recommendations for AppliedXL’s specific use case. Our collaboration fueled AppliedXL’s work on a custom-designed, in-house labeling platform.

RLHF Workflow: We created an end-to-end RLHF workflow, optimized for Amazon SageMaker, that aligns LLMs with preference datasets (see the launch sketch below). This workflow is a solid foundation for AppliedXL to achieve their ultimate goal of developing LLMs that follow their core journalistic principles.

Reduced Project Delays and GenAI Abandonment: AppliedXL had considered shelving the RLHF project because of its technical challenges; by partnering with Loka, they avoided expensive, time-consuming delays and kept the initiative on track.
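As referenced above, the sketch below shows one way an RLHF fine-tuning step could be launched as an Amazon SageMaker training job through the SageMaker Hugging Face estimator. The script name, S3 path, instance type and framework versions are assumptions made for illustration, not the delivered configuration.

```python
# Illustrative sketch of launching the RLHF fine-tuning step as a SageMaker
# training job. Names, versions and paths below are placeholders.
import sagemaker
from sagemaker.huggingface import HuggingFace

session = sagemaker.Session()
role = sagemaker.get_execution_role()  # assumes a SageMaker execution context

estimator = HuggingFace(
    entry_point="train_rlhf.py",        # hypothetical training script (TRL + PPO)
    source_dir="scripts",
    role=role,
    instance_type="ml.g5.2xlarge",      # single-GPU instance; size to the model
    instance_count=1,
    transformers_version="4.28",        # pick a supported DLC combination
    pytorch_version="2.0",
    py_version="py310",
    hyperparameters={
        "model_name": "meta-llama/Llama-2-7b-hf",
        "use_lora": True,
        "epochs": 1,
    },
)

# Preference data prepared by the labeling platform, staged in Amazon S3 (placeholder bucket).
estimator.fit({"train": "s3://example-preference-bucket/preferences/"})
```

Running the fine-tuning step as a managed training job keeps GPU capacity on demand rather than permanently provisioned, which suits an iterative labeling-and-training loop.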

Project highlights

Loka’s specialized knowledge of LLM customization, Amazon SageMaker and AI development provided AppliedXL with a platform that aligns with their journalistic standards.

Customized GenAI development

Loka fine-tuned LLMs to meet AppliedXL's specific journalistic standards, underscoring the importance of specialized expertise in AI for domain-centric quality output.

Advanced RLHF implementation

By creating a preference dataset and using it to train a custom reward model, Loka showcased RLHF’s ability to refine AI outputs based on expert human judgment. This approach is particularly valuable in areas where output needs to resonate with human expertise and preferences.
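The sketch below illustrates that refinement loop in the style of the classic TRL PPOTrainer interface (pre-0.12 releases; later TRL versions restructure this API): responses generated for each query are scored by the reward model, and a PPO step nudges the policy toward expert-preferred outputs. The prompts, hyperparameters and the score_response helper are hypothetical placeholders, not the project’s implementation.

```python
# One PPO update with TRL's classic PPOTrainer interface (illustrative only).
import torch
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead, PPOConfig, PPOTrainer

model_name = "meta-llama/Llama-2-7b-hf"  # assumed policy base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token

policy = AutoModelForCausalLMWithValueHead.from_pretrained(model_name)

def score_response(prompt: str, response: str) -> torch.Tensor:
    # Stand-in for the preference-trained reward model from the earlier sketch.
    return torch.tensor(1.0)

prompts = [
    "Summarize this clinical trial update: ...",
    "Draft an alert about this protocol amendment: ...",
]
config = PPOConfig(model_name=model_name, batch_size=len(prompts), mini_batch_size=1)
ppo_trainer = PPOTrainer(config, policy, ref_model=None, tokenizer=tokenizer)

# Generate candidate responses for each query.
queries = [tokenizer(p, return_tensors="pt").input_ids.squeeze(0) for p in prompts]
responses = ppo_trainer.generate(queries, return_prompt=False, max_new_tokens=64)
texts = [tokenizer.decode(r, skip_special_tokens=True) for r in responses]

# Score each response with the reward model, then take one PPO step that moves
# the policy toward higher-reward (expert-preferred) outputs.
rewards = [score_response(p, t) for p, t in zip(prompts, texts)]
stats = ppo_trainer.step(queries, responses, rewards)
```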

Efficient GenAI resource management

Leveraging Amazon SageMaker together with parameter-efficient techniques such as LoRA kept model training resource-efficient, highlighting the value of cloud platforms for complex AI workloads, as sketched below.
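A minimal example of that kind of resource-efficient setup, assuming LoRA via the PEFT library on a Llama-style base model; the rank, target modules and base checkpoint are illustrative choices, not the project’s settings.

```python
# Parameter-efficient fine-tuning sketch: wrap a base model with LoRA adapters.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")  # assumed base

lora_config = LoraConfig(
    r=16,                                 # low-rank adapter dimension
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections in Llama-style models
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # typically well under 1% of the full parameter count
```

Because only the low-rank adapter weights are trained, this style of fine-tuning generally fits a far smaller GPU footprint than full-parameter training on the same base model.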