It is very easy to output nonsense. I have been able to achieve a high throughput of “SEO-optimized nonsense”, but I have been having a hard time getting technical content that isn’t more work to edit than it would have been to just write myself (or write step by step by “holding the AI’s hand”).
I also got “greedy” and was trying to build a system that would go “all the way” from ideas/keywords to fully written articles. This was too ambitious.
So now I’m taking a more modular approach. First building up the foundational concepts and research and structure, which will later serve as the training data for AI-assisted writing. My modular approach also involves a human-in-the-loop at every stage - because nothing is more annoying than propagating errors with AI.
But my eye is on scalability, so I’m making sure that each stage of the process can run concurrently. In other words, the non-human steps should take roughly the same amount of time for one article as for one thousand.
My big “it’s alive” moment today was getting GPTResearcher from Tavily to run concurrently:
🧟‍♂️ It's alive!
This did about 15 reports in about 3 minutes. I haven’t pushed it further to see when I’ll hit my OpenAI API limits.
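For anyone who wants to try the same thing, here is a minimal sketch of the concurrency pattern, assuming GPT Researcher’s async Python interface (the class and method names below are from memory and may differ between versions; the topics are placeholders):

import asyncio
from gpt_researcher import GPTResearcher  # assumes the GPT Researcher Python package

async def research_one(topic: str) -> str:
    # One researcher per topic: conduct_research() gathers sources,
    # write_report() turns them into a draft report.
    researcher = GPTResearcher(query=topic, report_type="research_report")
    await researcher.conduct_research()
    return await researcher.write_report()

async def research_many(topics: list[str]) -> list[str]:
    # gather() runs every topic at once, so wall-clock time is roughly
    # the slowest single report rather than the sum of all of them.
    return await asyncio.gather(*(research_one(t) for t in topics))

if __name__ == "__main__":
    topics = ["placeholder topic 1", "placeholder topic 2"]
    reports = asyncio.run(research_many(topics))

The point is less the library and more the shape: each stage is a self-contained async function, so scaling from one report to a thousand is a change to the input list, not to the code.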
As breadcrumbs, here are the “resource reports” that I generated: https://polyspectra.com/tags/resource-reports/. These are not very engaging, nor are they meant to be, but they will serve as the foundation for the next step…
In addition to the human oversight at each step, this modular approach also lets me mix and match the best tools for the job. Tavily is great for research, but the writing style is pretty rigid, and I don’t feel like rewriting its guts. So I use it just for the step that it excels at.
There are three reasons why big companies are so obsessed with big data.
One, there are very few people inside the company that actually have a clue about what’s really important. (Or at least a low density of people who have a clue.)
Two, the people who do have a clue unfortunately have to justify every activity and expense to people who don’t have a clue. (And the people who don’t have a clue are usually the ones who are in charge of the budget.)
Three, big data makes it easier to draw spurious correlations. The people who don’t have a clue (and maybe even the ones who do have a clue but just don’t understand statistics) have no idea that the projection they’re looking at, the extrapolation that justifies the decision, has no basis in reality.
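A toy illustration of that third point: with enough columns of pure noise, something will always “correlate” with whatever metric you care about. A minimal numpy sketch, using completely made-up data:

import numpy as np

rng = np.random.default_rng(0)
n_samples, n_features = 50, 10_000

# Every feature column is random noise, and so is the "KPI" we pretend to explain.
features = rng.normal(size=(n_samples, n_features))
kpi = rng.normal(size=n_samples)

# Correlate each column with the KPI and keep the best-looking one.
corrs = np.array([np.corrcoef(features[:, i], kpi)[0, 1] for i in range(n_features)])
best = np.abs(corrs).argmax()
print(f"best 'signal': column {best}, r = {corrs[best]:.2f}")

# With 10,000 random columns and only 50 samples, the winner routinely
# shows |r| > 0.5 -- a convincing-looking chart with no basis in reality.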
I’m not always the best at making time for play. It’s very easy as an entrepreneur to have everything be about work in some way or another. Once the vision is big enough, there is no practical end.
This evening I discovered something wonderful (to me): WebChuck
It instantly transported me back to my days in PLOrk, where I first learned computer science, in ChucK.
(I do not recommend ChucK as your first programming language, I had a lot of unlearning to do.)
I hacked one of the demos for fun. It’s a little arpeggiator that plays a harmonic series. I added a low pass filter and an echo. I also added a slider for the filter frequency and a slider for the echo mix. Change chop to change the base frequency. Change slow to change the tempo.
Today, I made just a little bit of time for play. I’m glad I did. Maybe I’ll try it again someday.
Hello, WebChuck
Copy and paste the code below into the editor at WebChuck to play with it yourself.
// Harmonic Series Arpeggiator
// Written by Terry Feng
// CHANGE ME! ADD MULTIPLE SHREDS!
// Completely ruined by Raymond Weitekamp
1 => float chop; //let's go bro
1 => float slow;
//GUI bro
0.5 => global float f; // global variables create a GUI slider
0 => global float e;
220 => float baseFrequency; // starting frequency
12 => int numHarmonics; // number of harmonics to play
125::ms => dur noteDur; // note duration
// Unit Generator
SawOsc osc => LPF lpf => Echo a => dac;
osc.gain(0.5);
while (true)
{
    // Loop through the number of harmonics
    for (0 => int i; i < numHarmonics; i++)
    {
        // Update the oscillator frequency to the next harmonic
        (baseFrequency + (i * baseFrequency))/chop => osc.freq;
        // gui freqs
        f * 20000 => lpf.freq;
        e => a.mix;
        // Advance time to play the note
        (noteDur * slow) => now;
    }
}
Today I started thinking a lot about the fact that the 3D printing industry is “traumatized”. I am not necessarily using this word in a clinical (DSM-5) sense, but rather in the sense that the industry has been through a really hard time. It is hard to imagine more of a rock bottom.
So in this analogy, under the hypothesis that the collective behavior of the industry is influenced by this trauma, what does that mean for marketing?
I intend to write a longer post about this, but for now I will just say that I think it means that the industry is very sensitive to any kind of “salesy” or “hypey” marketing.
Big launches with the Chicago Bulls cheerleaders (actually happened and was as cringe-worthy as it sounds) are not going to work. Neither are the “we’re going to change the world” pitches. Everyone is too f*ing tired for that.
The customers are traumatized by 40 years of false promises. The OEMs are traumatized by the complete evaporation of any investor interest in the industry. The investors are traumatized by the fact that they lost a lot of money. The markets are traumatized by the hostile takeovers and failed merger attempts. The founders are traumatized by their balance sheets. The employees are traumatized by the never-ending re-orgs. (Again, hopefully “lower case t” trauma for most, but still trauma.)
What do we need to do instead?
Build trust and rapport. Be honest. Be transparent. Be vulnerable. Be human.
I think this will ultimately be a good thing for a historically frothy industry. The “fair weather” participants are already gone. I think it will lead to a healthier and more legitimate additive manufacturing sector. I think it will lead to better products. I think it will lead to better companies. I think it will lead to more trust and education between AM companies and engineers.
I’m curious to hear what you think, especially if there are lessons learned from other industries that have been through similar experiences.
We need some therapy for the 3D printing industry.
Today I noticed a new section on Purple Space for AI exploration and discussion. This part of the description had me noodling all day:
“Perhaps the biggest change in work since the invention of electricity.”
So if AI is electricity…
…then the foundation model companies are like the utility companies.
(AI had a few more ideas to fill out the obvious)
…then data is the fuel that powers these utilities. Without it, the AI cannot function, much like a power plant without coal or gas.
…then machine learning engineers are the electricians, building and maintaining the infrastructure that allows this power to be harnessed and used effectively.
…then the algorithms are the power grids, distributing the AI’s capabilities to where they’re needed most.
…then the applications of AI, from autonomous vehicles to voice assistants, are the various appliances and devices that use electricity in different ways to perform a wide range of tasks.
…then the ethical guidelines and regulations around AI are the safety standards and regulations in the electrical industry, ensuring that this powerful tool is used responsibly and safely.
(ok back to human mode)
AI is hot right now. There are a lot of people trying to resell electricity and make a buck. But if we believe the analogy, there are only going to be a few utilities, and they are going to be heavily regulated.
What seems more interesting to me is the idea of building a business that is powered by AI. The same way that the most valuable companies of the “second industrial revolution” were the ones that were powered by electricity, not the ones that sold electricity.
Or perhaps let’s take the analogy to a more personal level…
…who do you want to be? Tesla? Edison? Westinghouse? Shockley? Moore?
This week we did our first AI-assisted 6 Hats exercise with 6 Hats Helper.
It was way more fun than doing it alone. The helper was well-behaved, and in a few cases stated the “obvious” perspectives right away, so the team didn’t need to spend time naming them. At the end, it summarized everything for us to copy/paste as meeting notes.
6 Hats is a major decision-accelerator on its own, even more so when powered by AI.
Today I had an “Oh That’s Why It’s So Hard” moment courtesy of NIST, US taxpayers, and the Constitution.
The short version of the story is that it is a pain in the ass to print on LCD resin 3D printers, and there are all these inconsistencies that arise even when you specifically “tool match” a DLP printer to have the same specs (wavelength, power density, temp, etc).
Today I found out that I’m not the only one with this problem. In fact, it’s a big enough problem that NIST decided to investigate.
I’m sure we’ll write something more in depth on the polySpectra website about this.
SuperAgent has recently introduced a new feature that’s worth exploring: Custom Tools. This feature allows you to create your own tools within SuperAgent, opening up a world of possibilities for data manipulation and visualization. (and literally anything you can write a function for…)
One tool that caught my attention is the graph tool. It’s a powerful addition that allows you to visualize data in a more intuitive and insightful way.
A great example of this in action is the Super Stocks project. It uses the graph tool to visualize stock market data, making it easier to spot trends and patterns.
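I haven’t dug into the exact SDK calls yet, but conceptually a custom tool is just a plain function plus a description that the agent can decide to call. A hypothetical Python sketch (the CUSTOM_TOOLS registration below is illustrative only, not SuperAgent’s actual API):

from typing import Sequence

def moving_average(prices: Sequence[float], window: int = 5) -> list[float]:
    # The kind of helper you might expose as a custom tool for something
    # like the Super Stocks use case: smooth a price series.
    return [
        sum(prices[i - window + 1 : i + 1]) / window
        for i in range(window - 1, len(prices))
    ]

# Hypothetical registration -- the real SuperAgent call will look different,
# but the idea is the same: a name, a description the LLM can read, and the
# function itself. The agent decides when to call it.
CUSTOM_TOOLS = {
    "moving_average": {
        "description": "Compute a simple moving average over a list of prices.",
        "function": moving_average,
    }
}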