A month ago (which frankly feels like eons at this point), Satya Nadella said the following while announcing Microsoft's new AI tools for Office:
> This next generation of AI is fundamentally different from the AI that we’ve grown accustomed to around us. For years, AI has in fact powered online experiences ranging from search to social media, working behind the scenes to serve up recommendations for us or about us. From what we watched to what websites we visit to what we buy, that version of AI has become so second nature in our digital lives that we often don’t even realize or recognize it.
>
> You could say we’ve been using AI on autopilot, and now this next generation of AI, we’re moving from autopilot to copilot. We’re already starting to see what these new copilots can unlock for software developers, for business processes like sales, marketing, and customer service, and for millions of people synthesizing information in powerful new ways through multi-turn conversational search. As we build this next generation of AI, we made a conscious design choice to put human agency both at a premium and at the center of the product. For the first time, we have access to AI that is as empowering as it is powerful.
At LinkedIn, we talked about establishing an internal framework for "levels of automation" across different professional workflows, with the objective of slowly moving up those levels, much like what we're seeing in the automotive industry. Satya's argument is that we should be aiming for the opposite (at least in some scenarios), and that's interesting. And I actually see the value.
Over the past few months, as I've started writing more code, I've realized that I don't necessarily want ChatGPT or Copilot to write all the code for me (though it'd be great if they could), but rather to just help me ship more. That's the job-to-be-done. Yes, they both get things wrong every now and then, but they still make me more productive.
I've also noticed that the "right copilot" looks different for different problems:
- Business logic: When I'm thinking through how something should function, a chat interface works a lot better. It's more like pair programming, where I want to stay somewhat involved to shape the choices, or even help ChatGPT through errors it can't seem to solve on its own (understandable, since the APIs it tries to use keep changing).
- Boilerplate code: There's a lot of boilerplate code that we write to ship things, and Copilot is great at handling that. I just need it to spit out a function or something, and every once in a while I tweak it. For instance, I don't always want to define types by hand. For Cytation, it worked great at creating types for YouTube's API response, which it already knew, while I would have had to keep inspecting the JSON response one item at a time (see the first sketch after this list).
- Design: This is a gap right now. Creating front-end experiences remains a limitation for both ChatGPT and Copilot, and it demonstrates that you still need the right UX for the specific problem. Text can't solve everything. I'm excited to see the innovation happening here. For now, I expect some form of "Figma-to-React" to be the solution (replace Figma with any other tool that does better, like Framer, and React with iOS or Android based on your needs). The last decade's shift towards declarative UI frameworks offers an excellent foundation across platforms (see the second sketch after this list).
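To make the boilerplate point concrete, here's a minimal sketch of the kind of type definitions Copilot can spit out in one go, modeled on the shape of the YouTube Data API v3 `search.list` response. The field names follow the public API, but the exact subset Cytation used is my assumption, as is the small `searchVideos` helper around it.

```typescript
// Sketch of types for YouTube Data API v3 search.list.
// The subset of fields shown (and the searchVideos helper) is illustrative,
// not necessarily what Cytation used.

interface YouTubeThumbnail {
  url: string;
  width: number;
  height: number;
}

interface YouTubeSearchSnippet {
  publishedAt: string; // ISO 8601 timestamp
  channelId: string;
  title: string;
  description: string;
  channelTitle: string;
  thumbnails: Record<"default" | "medium" | "high", YouTubeThumbnail>;
}

interface YouTubeSearchItem {
  kind: string; // "youtube#searchResult"
  etag: string;
  id: { kind: string; videoId?: string; channelId?: string; playlistId?: string };
  snippet: YouTubeSearchSnippet;
}

interface YouTubeSearchResponse {
  kind: string; // "youtube#searchListResponse"
  etag: string;
  nextPageToken?: string;
  regionCode?: string;
  pageInfo: { totalResults: number; resultsPerPage: number };
  items: YouTubeSearchItem[];
}

// With the types in place, the call site becomes self-documenting.
async function searchVideos(query: string, apiKey: string): Promise<YouTubeSearchResponse> {
  const params = new URLSearchParams({ part: "snippet", q: query, type: "video", key: apiKey });
  const res = await fetch(`https://www.googleapis.com/youtube/v3/search?${params}`);
  if (!res.ok) throw new Error(`YouTube API error: ${res.status}`);
  return (await res.json()) as YouTubeSearchResponse;
}
```

Writing that by hand means cross-referencing the JSON response one field at a time; Copilot already "knows" the shape and fills it in instantly.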
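And on the design point: part of why declarative frameworks are a good foundation for design-to-code tools is that a component is just a function from data to UI, which is roughly what a design tool's layer tree already describes. The component below is a hypothetical example of what a "Figma-to-React" step might emit; the `VideoCard` name and its props are made up for illustration.

```tsx
import * as React from "react";

// Hypothetical output of a design-to-code step: a declarative component whose
// structure mirrors a design tool's layer tree. Names and props are illustrative.
type VideoCardProps = {
  title: string;
  channel: string;
  thumbnailUrl: string;
};

export function VideoCard({ title, channel, thumbnailUrl }: VideoCardProps) {
  // The UI is a pure function of props -- no imperative view code for a tool to translate.
  return (
    <article style={{ display: "flex", gap: 12 }}>
      <img src={thumbnailUrl} alt={title} width={160} height={90} />
      <div>
        <h3>{title}</h3>
        <p>{channel}</p>
      </div>
    </article>
  );
}
```

The same mapping applies on mobile, where SwiftUI and Jetpack Compose play the equivalent role.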
It's possible that massive value will accrue to those that combine these into one package for an entire org.