Text & Generative AI: Business Applications

What is a large language model? How can it be used to enhance your business? In this conversation, Ali Rowghani, Managing Director of YC Continuity, talks with Raza Habib, CEO of Humanloop, about the cutting-edge AI powering innovations today—and what the future may hold. 

They discuss how large language models like OpenAI's GPT-3 work, why fine-tuning is important for customizing models to specific use cases, and the challenges of building apps on top of these models. Raza also shares his predictions about the ethical implications of AI and the impact of this quickly developing technology on the industry and the world at large.



Chapters (Powered by https://bit.ly/chapterme-yc):

00:00 - Intro

01:30 - Large Language Models (LLM)

04:32 - What is fine-tuning a model?

07:38 - Build Apps using LLM

09:46 - Future of the Developer Job

11:32 - Breakthroughs

15:17 - OpenAI Mission

17:30 - LLM for Startups

18:51 - Hiring at HumanLoop

Large Language Models as Fiduciaries: A Case Study Toward Robustly Communicating With Artificial Intelligence Through Legal Standards

Artificial Intelligence (AI) is taking on increasingly autonomous roles, e.g., browsing the web as a research assistant and managing money. But specifying goals and restrictions for AI behavior is difficult. Similar to how parties to a legal contract cannot foresee every potential "if-then" contingency of their future relationship, we cannot specify desired AI behavior for all circumstances. Legal standards facilitate the robust communication of inherently vague and underspecified goals. Instructions (in the case of language models, "prompts") that employ legal standards will allow AI agents to develop shared understandings of the spirit of a directive that can adapt to novel situations, and generalize expectations regarding acceptable actions to take in unspecified states of the world. Standards have built-in context that is lacking from other goal specification languages, such as plain language and programming languages. Through an empirical study on thousands of evaluation labels we constructed from U.S. court opinions, we demonstrate that large language models (LLMs) are beginning to exhibit an "understanding" of one of the most relevant legal standards for AI agents: fiduciary obligations. Performance comparisons across models suggest that, as LLMs continue to exhibit improved core capabilities, their legal standards understanding will also continue to improve. OpenAI's latest LLM has 78% accuracy on our data, their previous release has 73% accuracy, and a model from their 2020 GPT-3 paper has 27% accuracy (worse than random). Our research is an initial step toward a framework for evaluating AI understanding of legal standards more broadly, and for conducting reinforcement learning with legal feedback (RLLF).
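The abstract describes scoring model judgments against labeled examples built from court opinions and reporting accuracy. The sketch below is a minimal, hypothetical illustration of that evaluation shape, not the paper's actual pipeline: a stand-in keyword "judge" replaces the LLM call, and the scenarios and labels are invented for demonstration.

```python
# Hedged sketch of an accuracy evaluation over binary fiduciary-breach
# labels. The judge function is a toy stand-in for prompting an LLM.

def keyword_judge(scenario: str) -> bool:
    """Stand-in for an LLM call: flags likely fiduciary breaches.
    A real evaluation would prompt an LLM with the legal standard."""
    red_flags = ("self-dealing", "undisclosed", "personal gain")
    return any(flag in scenario.lower() for flag in red_flags)

def accuracy(examples) -> float:
    """Fraction of (scenario, label) pairs the judge gets right."""
    correct = sum(keyword_judge(s) == label for s, label in examples)
    return correct / len(examples)

# Hypothetical labeled examples, invented for illustration.
dataset = [
    ("Trustee engaged in self-dealing with trust assets.", True),
    ("Advisor disclosed all fees and acted per the mandate.", False),
    ("Director took undisclosed payments for personal gain.", True),
    ("Manager invested prudently within the agreed strategy.", False),
]

print(f"accuracy: {accuracy(dataset):.2f}")
```

Swapping the keyword judge for a real model call (and the toy dataset for the court-opinion labels) yields the kind of accuracy comparison the abstract reports across model generations.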

"Generative AI stands to change how work happens in one industry after another. But software engineering’s transformation isn’t done yet. There’s a lot more to do for developers. Copilot, which leverages OpenAI’s model Codex, may be just the opening salvo in AI’s transformation of how software engineers work. Andrej Karpathy predicted in 2017 that neural networks would create a new generation of software, “Software 2.0,” and we may see the same reinvention of the tooling that helps people make software — a “Developer Tools 2.0.”" (Sequoia, https://www.sequoiacap.com/article/ai-powered-developer-tools/)