Noor Shaker

Navigating Generative AI: From Promise to Practice

As I took up my assistant professorship, the first wave of generative AI began to reshape the landscape. NeurIPS, the eminent gathering of AI enthusiasts, was expanding its venues to accommodate the surge of participants. Among them, pioneers and researchers exploring Generative AI were joined by a growing influx of industrial stakeholders and sponsors, eager to showcase innovations and lure fresh talent.


The year was 2016, and the echoes of that pivotal NeurIPS conference lingered. Days after departing the event, I found myself at a table with colleagues, each brimming with ideas for startups in the Generative AI domain. It was a time of boundless possibilities, much like the inception of Geometric Intelligence by some of our friends, which soon found a home under Uber's wing. Acquisitions were on the rise, and researchers sought substantial backing from venture capitalists that would, in turn, pave the way to even more capital from an acquirer.


The graph below, depicting the trajectory of AI company acquisitions until mid-2017, captured the spirit of the time: "AI Startups Take The Money And Run As Big Tech Comes Acquiring."


The spotlight of that era was on computer vision, as Convolutional Neural Networks (CNNs) emerged as trailblazers of innovation. Their unprecedented performance in tasks like classification, segmentation, and super-resolution signaled a transformative shift away from conventional models built on hand-crafted features. Looking beyond 2016, the peak of that first Generative AI wave, the subsequent three years saw a rising tide of acquisitions, culminating in the awakening of 2019 that we all know.




Surveying the present landscape, I'm struck by the resonance between then and now, as history seemingly replays its course. Lessons from the first wave of Generative AI hold a mirror to our contemporary endeavours, illuminating paths to navigate and reshape the innovation trajectory.


The strides taken in advancing generative AI over the past seven years have been monumental. Large language models (LLMs) have redefined not just the tasks computers undertake but, crucially, the very essence of human-computer interaction. The shift from encoding queries with specific keywords to holding natural dialogues marks a pivotal leap, though it is only the first step of a transformation that will play out at industrial scale.


In a world inundated with news and blogs heralding AI's transformative potential, a remarkable influx of capital is flooding into the sector. This year alone, an astonishing $15 billion has flowed in, dwarfing the $4 billion of the preceding year. Keep in mind that this figure is skewed by a few colossal deals, such as OpenAI's significant capital injection.



The distribution of these funds is what truly captures my attention. The overwhelming majority, $12 billion out of $13 billion, is channeled into building large foundation models, while a mere 0.04% is allocated to industrial applications, and even fewer resources are dedicated to proprietary model development. This financial dynamic raises critical considerations, and I have a few insights on the path ahead.

Foundation models will be democratised, sooner than you think


The democratisation of foundation models is not a distant horizon. Within a few months of OpenAI's chatbot launch, a proliferation of competitor models became publicly available, and permissively licensed models cleared for commercial use emerged, sometimes outperforming their proprietary counterparts. While building foundation models from scratch is currently very expensive, the landscape is swiftly evolving: in the spirit of Moore's Law, GPU efficiency should double and prices should decline over the next year or two. This trajectory is a testament to the rapid pace of advancement, a far cry from the weeks spent in virtual queues for GPU access that characterized my early deep learning experiments back in 2016.


Yet, data access emerges as a formidable challenge. For companies eyeing a competitive edge, the availability of large, domain-specific, clean data becomes paramount. Effective training and fine-tuning of LLMs on proprietary data will inevitably determine the winners in each sector.
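To make that concrete, here is a minimal sketch of what fine-tuning on proprietary data can look like in practice, using parameter-efficient fine-tuning (LoRA) with the Hugging Face Transformers and PEFT libraries. The base model, the clinical_notes.jsonl corpus, and the hyperparameters are illustrative assumptions on my part, not recommendations.

```python
# Sketch: adapt an openly licensed base model to a proprietary domain corpus
# with LoRA, rather than training a foundation model from scratch.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

base_model = "meta-llama/Llama-2-7b-hf"      # assumed base model; any permissive LLM works
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token

# Wrap the base model with small trainable LoRA adapters (cheap to train and store).
model = AutoModelForCausalLM.from_pretrained(base_model)
model = get_peft_model(model, LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM"))

# "clinical_notes.jsonl" is a placeholder for any clean, domain-specific text corpus.
data = load_dataset("json", data_files="clinical_notes.jsonl")["train"]
data = data.map(lambda x: tokenizer(x["text"], truncation=True, max_length=512),
                remove_columns=data.column_names)

Trainer(
    model=model,
    args=TrainingArguments(output_dir="domain-lora", num_train_epochs=1,
                           per_device_train_batch_size=4, learning_rate=2e-4),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()
```

The point is less the specific libraries than the workflow: curate a clean domain corpus, start from an openly licensed base model, and adapt it cheaply, which is exactly where proprietary data becomes the differentiator.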


Vertical integration is the way to go, but it requires patient capital and strategic thinking


The landscape is still in its early stages, and expectations run high among companies and decision-makers grappling with the gap between the realm's possibilities and its current realities. The horizon is dominated by horizontal offerings. While models like ChatGPT and Stable Diffusion are poised to revolutionize our daily tasks, they only scratch the surface of what is possible, destined to meld seamlessly into contemporary applications, often free of charge.

Nonetheless, adoption takes time. Much like the inaugural wave of AI in 2016, the true game-changers will be those who embrace vertical integration, weaving themselves into the fabric of a specific domain to create defensibility and profound impact.


Drawing from my familiarity with healthcare and life sciences, I find examples that resonate with the future of LLMs. The transformative impact of CNNs on computer vision became tangible in 2016, but their industrial fruition took the better part of a decade. Take radiology, for instance: a domain flooded with images yet plagued by a dearth of radiologists and prone to human error. It is a prime candidate for computer vision automation, and though the technology was ready, companies like Kheiron Medical took nearly seven years to build and commercialize their diagnostic software, underscoring how slow the path of translation can be.


Likewise, domains like pathology and drug discovery are on the cusp of transformation through deep learning. Pioneers like PathAI and Recursion embarked on this trajectory in 2016 and 2013, respectively. Despite raising substantial sums ($255 million and $665 million, respectively), tangible revenue remains elusive. The challenges surrounding data access, generation, processing, and curation in these domains are both formidable and defensible. There is no doubt, however, that such applications will transform healthcare and life sciences.


Looking Forward: Navigating a New Era


As the horizon broadens, the commoditization of large language models edges closer, and ultimate success rests with those who build models tailored to specific industry challenges. Now that the tools are in place, the journey to realize these models has just begun. This path hinges on strategic foresight and collaboration between data owners and technology pioneers, a synergy fueled by unwavering determination that paves the way for genuine transformation.


With genuine enthusiasm for the potential of this technology, I stand as a strong believer in its transformative power. To those forging ahead, particularly in healthcare and life sciences, I encourage you to connect and reach out.


Start building.




