Sometimes you get lucky on the first try.
(Don’t ask me how many times I completely struck out trying to improve this image.)
🙏 Thank you! 🙏
The big companies will never catch up.
They won’t be the first to know, even if they are the first to hear.
The tech scout will have a call with the business unit. This will take a few weeks to schedule.
If it’s really interesting, the business unit and the tech scout will have to write a report for the leadership team.
If that goes well, and assuming the CEO also sees something about it on social media, then a committee will be formed to investigate.
The committee will have to write a strategy, a timeline, a budget, and a risk assessment. Maybe even a policy.
No one in the company knows how to do this new thing, so they will have to hire a consultant.
They’ll need multiple competing bids first before they select a consultant. Then the consultant will have to write a report for the committee.
The committee will have to write a report for the leadership team.
The leadership team will have to write a report for the board.
The board will have to make a decision.
A team will have to be formed to implement the decision.
The committee will have to hire the team.
The team will have a lot of new blood, really excited to make a difference.
But before they can do anything, they will have to write a report for the committee.
By the time the budget is approved, the team will have gotten the message that they don’t actually need to do anything to continue to get paid.
But they will try to do something anyway. And it will take longer than expected. And it will cost more than expected. And it will be worse than expected.
The team will have to present it to the committee.
The committee will have to present it to the leadership team.
The leadership team will have to present it to the board.
At this point, no one will remember why they wanted to do this in the first place.
All the while, the small company will have been doing it…with just a few people who are really excited to make a difference, and AI agents to implement their vision.
Apologies for the delay, I’ve been walking on the edge of infinity. It has been quite consuming.
I’m still practicing my discernment.
Some key learnings:
Once a year, on the day of the Berkeley Half Marathon, I try to make a point to leave extra early to get to yoga, because they shut down my normal freeway exit.
This year, it didn’t matter - life had another plan for me. Right when I got between the two exits that were closed for the race, emergency vehicles started pushing through the traffic, and closed every single lane of the 80.
At first I was quite grumpy. Then the woman in the car behind me started to cheer on the runners - who were just a few feet away from us.
Then I remembered: I know exactly what to do in this situation. My Burning Man skills kicked in. I rolled down the window and pumped up the jams for the runners.
Sometimes you have to slow down to speed up.
Other times you realize you’re in a rabbit hole.
Today was a double graduation day. I graduated from both Mayo Oshin’s Build a ChatGPT Chatbot For Your Data and TREW Marketing’s Content Writing, Engineered.
I highly recommend both courses. I’m already using the skills I learned in both courses to improve my business. A few relevant links to what I’ve learned and completed:
Cheers!
I’m developing some ideas around using ensembles of LLMs for specific tasks. Today I’m sharing a preview of the first “trivial” example, which is the foundation of linking arrays of LLMs to the concept of ensembles from statistical mechanics.
In this example, an ensemble of LLMs is called with the same prompt, but with a “high” temperature (0.99). There is a “prompt” and a “grading criteria” that are combined to create the full prompt.
The only variance comes from the “temperature”, which you can see doesn’t result in much variety.
Then the “judge” is asked to rank the results, based on the grading criteria, with the judge having a low temperature (0.1):
The output is not impressive. But from here we can start to build up to more interesting examples.
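If you want to poke at the idea yourself, here’s a minimal sketch of the setup described above. It assumes the OpenAI Python client (v1+); the model name, prompt text, and grading criteria are placeholders I made up for illustration, not the ones I actually used.

```python
# Minimal sketch: an "ensemble" of identical high-temperature calls,
# plus a low-temperature "judge" that ranks them against grading criteria.
# Model name, prompt, and criteria below are placeholder assumptions.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-3.5-turbo"  # placeholder; swap in whatever model you prefer

prompt = "Write a two-sentence product description for a solar-powered lantern."
grading_criteria = "Clarity, specificity, and a concrete benefit in the first sentence."
full_prompt = f"{prompt}\n\nGrading criteria to satisfy: {grading_criteria}"

def sample(n: int = 5, temperature: float = 0.99) -> list[str]:
    """Call the ensemble: n identical prompts at high temperature."""
    results = []
    for _ in range(n):
        resp = client.chat.completions.create(
            model=MODEL,
            messages=[{"role": "user", "content": full_prompt}],
            temperature=temperature,
        )
        results.append(resp.choices[0].message.content)
    return results

def judge(samples: list[str]) -> str:
    """Ask a low-temperature judge to rank the samples against the criteria."""
    numbered = "\n\n".join(f"[{i + 1}]\n{s}" for i, s in enumerate(samples))
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{
            "role": "user",
            "content": (
                f"Rank the following candidates from best to worst according to "
                f"these grading criteria: {grading_criteria}\n\n{numbered}\n\n"
                "Return the ranking as candidate numbers, with a one-line reason each."
            ),
        }],
        temperature=0.1,
    )
    return resp.choices[0].message.content

if __name__ == "__main__":
    candidates = sample()
    print(judge(candidates))
```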
Much more to come on this topic. Stay tuned. 📻
Today I had a “holy shit” AI moment. (Which has been happening quite frequently.)
Corrie Who Writes turned me onto this plugin for Google Sheets called GPT for Sheets™ and Docs™.
Basically, it adds a bunch of functions to Sheets that help interface with OpenAI (in both directions). Every cell can run its own API call (in parallel). You can reference other cells. Combine, list, split... it’s really nuts.
If LLMs weren’t already capable of generating more text than anyone could possibly read or use, this really seals the deal. Quantity: Check ✅. See below for about 20 question and answer pairs generated using this plugin in under 1 minute, from my initial input of just 4 questions (and no answers).
Quality: In Progress ⌛. This is more work. Especially if there is too much content to have a human editor. Stay tuned. 📻
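For the programmers: the underlying pattern (one API call per “cell,” all in flight at once) is easy to reproduce outside of Sheets. The sketch below is not the plugin itself, just that pattern in plain Python; the model name and prompts are placeholders.

```python
# Sketch of the per-cell, parallel-call pattern: each question is one API
# call, like each spreadsheet cell running its own formula.
# This is NOT the GPT for Sheets plugin; model name and prompts are assumptions.
from concurrent.futures import ThreadPoolExecutor
from openai import OpenAI

client = OpenAI()

questions = [
    "What is retrieval-augmented generation?",
    "How do embeddings work?",
    "What is a context window?",
    "What does temperature control?",
]

def answer(question: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder
        messages=[{"role": "user", "content": f"Answer briefly: {question}"}],
        temperature=0.3,
    )
    return resp.choices[0].message.content

# Run all the calls in parallel, one per question.
with ThreadPoolExecutor(max_workers=len(questions)) as pool:
    answers = list(pool.map(answer, questions))

for q, a in zip(questions, answers):
    print(f"Q: {q}\nA: {a}\n")
```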
Here’s a “deep work” 1-2 punch that I’ve had some good results with.
Step 1: Walk and talk out a very clear plan. Record a voice memo. I use the Yealink BH71 Pro headset, which has amazing noise cancelling (for the microphone) and makes me look like a telemarketer on a hike. Later you can transcribe this with your favorite voice transcription tool. (Optionally, get ChatGPT to format the transcription for you.) It is very important to resist the urge to use your phone: just talk out the steps, talk out the framework, talk out the plan for what you will do. When you get distracted, bring it back to the plan. Make sure you have a clear outcome in mind.
Step 2: Block 90-120 minutes of “deep work” time to execute the plan. Try to get to the outcome, and try to follow the steps. Maybe the plan was too ambitious; maybe you need to adjust it. But try to get to the outcome. Maybe you don’t need 99% of the structure you thought you did (that’s what happened to me today). But you incept in yourself the idea of what you want to do, and then you do it. When you get distracted, bring it back to the outcome.
That’s it. Go get ’em, champ!
P.S. - I came up with an interesting theory while performing Step 1 this morning (a distracting thought from the plan I was formulating), which is that Cal Newport is actually an AI being sent to us from the future to teach us how to focus like a machine. (Think: The Terminator of “Deep Work”.) I’m not sure how to test this hypothesis.
This is the prompting course I wish I had taken months ago: https://www.deeplearning.ai/short-courses/chatgpt-prompt-engineering-for-developers/. It’s free.
Props to Mayo Oshin for the recommendation. (His course Build a ChatGPT Chatbot For Your Data is amazing.)
Why do I wish that I had found this months ago?
After watching it, I realized that many of the AI applications where I thought I would need plug-ins, or special function calling set-ups, or a proprietary/paid solution, etc — these can be solved with better prompting. It’s also helpful to see them build up to more complex use cases, step-by-step.
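As one illustration (my own sketch, not an example from the course): a task I would have reached for function calling to solve, like pulling structured fields out of an email, can often be handled with a well-delimited prompt that asks for JSON directly. The model name and the sample email below are placeholders.

```python
# Sketch: structured extraction with plain prompting instead of a
# function-calling setup. Model name and email text are placeholders.
import json
from openai import OpenAI

client = OpenAI()

email = """Hi team, the Acme renewal closed yesterday for $42,000/year.
Next step is a kickoff call the week of June 5. -- Dana"""

prompt = f"""Extract the following fields from the email delimited by <email> tags
and return ONLY valid JSON with keys: customer, amount_usd, next_step.

<email>{email}</email>"""

resp = client.chat.completions.create(
    model="gpt-3.5-turbo",  # placeholder
    messages=[{"role": "user", "content": prompt}],
    temperature=0,
)

# Parsing can fail if the model adds extra text; a real pipeline would retry.
data = json.loads(resp.choices[0].message.content)
print(data["customer"], data["amount_usd"], data["next_step"])
```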
One of the points that Andrew Ng makes in the intro is that there is no “best prompt for X”. So instead this course teaches you the fundamentals, and more importantly - how to iterate to get to a solution that works for your application.
With these prompting fundamentals and a basic RAG pipeline (which is getting easier and easier every day) - you can really accelerate a ton of business tasks.
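To make “basic RAG pipeline” concrete, here is a bare-bones sketch of the shape of the idea: embed your documents, retrieve the most similar one to the question, and answer from that context. It assumes the OpenAI Python client (v1+); the model names, documents, and helper functions are my own placeholders, and a real pipeline would add chunking and a proper vector store.

```python
# Bare-bones RAG sketch: embed docs, retrieve the closest one, answer from it.
# Model names, documents, and helpers are placeholder assumptions.
from openai import OpenAI

client = OpenAI()

docs = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Support hours are 9am-5pm Pacific, Monday through Friday.",
    "Enterprise plans include a dedicated account manager.",
]

def embed(texts: list[str]) -> list[list[float]]:
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return [d.embedding for d in resp.data]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb)

doc_vectors = embed(docs)

def answer(question: str) -> str:
    # Retrieve: pick the document most similar to the question.
    q_vec = embed([question])[0]
    best_doc = max(zip(docs, doc_vectors), key=lambda pair: cosine(q_vec, pair[1]))[0]
    # Generate: answer using only the retrieved context.
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder
        messages=[{
            "role": "user",
            "content": f"Answer the question using only this context:\n{best_doc}\n\nQuestion: {question}",
        }],
        temperature=0,
    )
    return resp.choices[0].message.content

print(answer("How long do customers have to return an item?"))
```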