Until today, I would have said that Stripe has come a long way since its start as “PayPal for developers”. Despite being valued at almost $100B, the company’s main product still does not offer an option for the most common pricing scheme in the history of the world: the volume discount.
Ironically, you can set up a volume discount for a recurring subscription, but not for a one-time purchase. Apparently no one at this massive company has ever sold physical products, or wants to. The confusing thing is that the “one time” payment option gets automatically converted to “recurring” if you pick “volume” pricing. What is truly hilarious is that Stripe’s customer support has no idea why anyone would ever want to use their product for just a one-time purchase with a volume discount.
I guess you could say that they are staying true to their founding mission: developers only. While I don’t self-identify as a developer, I can tell you that a developer could fix this oversight in about 5 minutes.
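In the meantime, the workaround is to do Stripe’s job for it: compute the volume price yourself and hand Checkout a plain one-time price. A minimal sketch, assuming the official stripe Python library - the tiers, product name, and URLs here are made up:

```python
# Sketch of a do-it-yourself volume discount for a one-time Stripe Checkout payment.
# Tier boundaries and amounts below are illustrative, not real pricing.
import stripe

stripe.api_key = "sk_test_..."  # your secret key

# (minimum quantity, unit price in cents) - hypothetical tiers, sorted ascending
TIERS = [(1, 10000), (10, 9000), (50, 7500)]

def unit_amount_for(quantity: int) -> int:
    """Return the discounted per-unit price (in cents) for a given quantity."""
    amount = TIERS[0][1]
    for min_qty, price in TIERS:
        if quantity >= min_qty:
            amount = price
    return amount

def create_checkout(quantity: int) -> stripe.checkout.Session:
    # mode="payment" keeps this a one-time purchase; the volume discount is
    # baked into unit_amount instead of relying on Stripe's (subscription-only)
    # tiered pricing.
    return stripe.checkout.Session.create(
        mode="payment",
        line_items=[{
            "quantity": quantity,
            "price_data": {
                "currency": "usd",
                "product_data": {"name": "Example widget"},
                "unit_amount": unit_amount_for(quantity),
            },
        }],
        success_url="https://example.com/thanks",
        cancel_url="https://example.com/cart",
    )
```

The catch is that the quantity has to be known when the session is created, so the discount can’t re-tier if the customer changes the quantity inside Checkout - which is exactly why this belongs in the product, not in every merchant’s glue code.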
I filed a feature request, but I’m not getting my hopes up. After all, I am just a confused manufacturer of physical goods, trying to implement the most ancient of discounts. And manufacturing is only an $11 trillion industry…probably not big enough for anyone at Stripe to care.
Retrieval Augmented Generation (RAG) is getting easier by the minute; I can’t keep up with the daily influx of new tools in the space. My initial experience with ChatGPT was not positive - I couldn’t believe how useless the tool was. The “hallucinations” were what really threw me off - the LLM’s propensity to just make up random information was too high.
RAG changed all of this for me. With the ability to train the AI on my sources of truth, all of a sudden this became an indispensable tool. I really was blown away by the power of RAG to shift the functional utility of LLMs.
Today, I noticed something funny - our customer support AI seemed to have forgotten its training. It was now getting questions “wrong” that it previously had an amazing track record of answering.
The culprit? Information overload.
Originally we only trained pSai on the polySpectra documentation website and the product pages from our e-commerce store — in part because I wanted it to just be able to answer basic product questions, and in part because I ran into a technical difficulty getting it to scrape the entire polySpectra website when I first set it up. Recently, I figured out how to train it on the entire polySpectra.com website, which at first seemed like a good thing.
Unexpectedly, more became less. This extra information was the RAG that broke the camel’s back. There were just enough conflicting sources of truth in the AI’s verified sources to confuse it. Where before it had been doing an amazing job, now it was giving wrong answers.
Without the sources of truth, LLMs are pretty useless customer support agents. With just enough information, they are surprisingly good. With too much, they become unhelpful again.
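To make the failure mode concrete, here is a toy sketch of top-k retrieval - plain bag-of-words cosine similarity standing in for real embeddings, with invented documents. With a small, curated knowledge base the current datasheet is the obvious hit; scrape in one stale page about the same product and it rides along into the context window, and the model has to guess which number to trust:

```python
# Toy illustration of RAG "information overload": more sources of truth can
# mean more conflicting chunks in the retrieved context. Bag-of-words cosine
# similarity stands in for real embeddings; the documents are invented.
import math
from collections import Counter

def vectorize(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def top_k(query: str, docs: list[str], k: int = 2) -> list[str]:
    q = vectorize(query)
    return sorted(docs, key=lambda d: cosine(q, vectorize(d)), reverse=True)[:k]

curated = [
    "COR Alpha datasheet 2023: heat deflection temperature 115 C",
    "Shipping policy: orders leave the factory within five business days",
]
# The extra pages that come along with scraping the whole site:
everything = curated + [
    "Old blog post from 2019: COR Alpha heat deflection temperature 80 C",
]

query = "what is the heat deflection temperature of COR Alpha"
print(top_k(query, curated))      # the current datasheet is the clear top hit
print(top_k(query, everything))   # the stale 2019 number now rides along in the context
```

The retriever is doing exactly what it was asked to do. The problem is upstream, in what gets admitted to the knowledge base in the first place.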
The role of the humans in this situation is clearly to maintain a single source of truth. It makes me wonder how many humans we’ve confused with our website over the years…much more to distill and refine.
It also makes me wonder what my “context window” is. How many things do I get confused, by having access to too many sources of information?
There is something very telling about the fact that Bing Image Creator considers this prompt to be “unsafe content”:
dia de los muertos skull, dayglow poster with extremely bright colors and black background, detailed mandala patterns and flow lines
We’ve candy-coated our dead. Halloween is for selling sugar and sexy costumes. Nothing truly scary. No room for a conversation about mortality. No room for reverence.
Disney costumes and drinks.
I’m all for the play. I’m all for the chance to assume a character.
I’m all for celebration. Death can be a tremendous celebration. I’m not suggesting we make it somber.
But death is real. And scary. And coming for each of us.
What a missed opportunity to have a really meaningful cultural conversation. It’s hard to hear the wisdom of the dead when you’re chewing on candy.
Different social media platforms have different expectations, a different culture, a different cadence. None of that interests me in the slightest.
But I am on a mission, and I have to play the game if I want anyone to read what I have to write, to hear what I have to say. RAW.works is a reset, an inquiry. How do I want to show up on the internet? How do I want to choose to engage with 8 billion people and 8 trillion robots every day?
I certainly don’t want Elon Musk choosing for me. I don’t want Alphabet choosing for me. I don’t want Apple choosing for me.
I acknowledge and respect the power of these tools. I see that if I want to show up in a search result, there are certain things I need to do to format my website for Google. With these new AI search tools like Poe Web Search [link my old post], perhaps that game is going to change a little bit. But we still need to bend to be noticed, which is increasingly true in a world where more content can be created in a day than consumed in a lifetime.
The intentionality part is: How much does that matter to me? Does SEO matter 0% to me? Does it matter 1% to me?
How far am I willing to bend my interests and my attention, to garner the interest and attention of others?
This question is at least as old as the first living organisms. The struggle to find a niche in a complex system. Initially, it was just about survival. As more and more people ascend Maslow’s Hierarchy of Needs, it becomes more cultural, more psychological, more philosophical.
But we need to be careful, because wars and pandemics, climate chaos and rogue AGI have the potential to bring us all the way back to survival.
The memelords won’t have much to offer in the way of food and shelter. Infinite jest. Wirehead now. Xanadu.
How far am I willing to bend my interests and my attention, to garner the interest and attention of others?
On a more Adlerian plane…maybe the greatest social good is achieved when we don’t bend so much. Or at least we don’t spend so much time worrying about bending. We either bend or we don’t. We pick the way we’re willing to wiggle, or maybe the wiggle picks us, but either way — we wiggle, we dance.
If you hire a mechanical turk, you are going to get buttons pushed.
If you hire a contractor, you are going to get billable hours.
If you hire an employee, you are going to get a butt in a chair.
If you hire a consultant, you are going to get a report.
If you hire an expert, you are going to get an opinion.
In none of these cases are you guaranteed to get the true enrollment of a human being. You would likely expect the attention of a human being, but these days you might have to pay extra for that. In the case of the mechanical turk, the probability is very high that they are actually using AI to “cheat” at their “jobs”. And who can blame them? I would do the same thing.
As an exploration of the concept of enrollment, let me tell you about the time I failed to hire Don McCurdy.
I first stumbled upon Don’s work through my interest in augmented reality. He was working at Google at the time and was a major contributor to Model Viewer, as well as the three.js library that powers it. I later realized he was even contributing to the glTF specification itself, which is poised to be the JPEG of 3D models. (Any day now, you’ll see.)
While I was in the process of building polySpectra AR, I saw that Don had left Google. Between this personal announcement and noticing his developer “tip jar” on GitHub, I figured that he would be open to at least discussing the possibility of working together.*
I sent him an email. No reply. I followed up. No reply.
The funny thing was that during this time when I couldn’t get any response to my emails, Don was incredibly generous with answering lots of my programming questions related to the problems we were having with polySpectra AR. I quickly realized that if I showed respect for his craft, showed I had done enough searching to attempt to solve the problem myself (or have our developers try to solve it themselves), and presented the question in the channels of Don’s choice (GitHub & Discord at the time) — then I was fairly likely to get Don to solve my problems within a day or two. Completely for free, on his own time, just for fun.
I literally could not pay this expert. I could not get him to engage in a discussion about money. I could not buy his attention. But by engaging in a real dialog in an appropriate forum, I could see that Don was clearly enrolled in the idea of helping other people solve their problems with web AR. He was particularly helpful when I asked questions about how to use the amazing library that he wrote, glTF Transform. This makes sense: it is his project, he cares deeply about it, and he is enrolled in helping other people use it.**
Don is not an outlier. This is a very common pattern I have seen with the best in the world. If you offer money, you get no reply. If you engage in a genuine dialog, you can get the best advice, from the smartest people, for free.
Money can’t buy you enrollment.
* Re-reading this, I am laughing out loud at the way this phrase slipped in: “working together”. But that’s exactly what this is all about. What does it mean to “work together”?
** It looks like Don is now experimenting with a new way to monetize his expert attention: glTF Transform Pro.
The battle rages on here at my home front as I’ve just lost connection to the information superhighway. Absolute chaos has erupted - I’ve scrambled to deploy my personal hotspot reserves but even they can only offer up a trickle of bandwidth.
Rations are being put in place - I’ve had to limit who else in the household can piggyback off my mobile connection at a given time. Streaming anything heavier than mere tweets appears to be out of the question for now.
Morale is low amongst the ranks. We’re all tired of staring at the dreaded buffering symbols that mock us endlessly. I’ve logged an urgent message with our ISP chatbot but so far their support has been insufficient.
What began as chaos has descended into full blown crisis…particularly for the younger recruits who have never known life without a steady internet stream.
The Millennial and Gen Z troops are flailing without the online world they’re so accustomed to. Endless refreshes and buffering frustration have taken their toll on morale.
Making matters worse, many have never had to endure these kinds of communications blackouts before. Constant connectivity is all they’ve ever known. Now they’re scrambling to learn archaic skills like entertaining themselves without screens or getting homework done via mobile hotspots with data limitations.
I’m fighting to keep a steady leadership presence but it’s not easy wrangling troops so used to constant connectivity. Rations remain tight and options few. Unless the cavalry arrives soon with repairs, I fear a full meltdown may be imminent amongst the digitally dependent ranks.
For now all I can do is pray the older generations can keep the younger ones from losing it completely until reinforcements liberate us from this internet Iron Dome. Wish us luck out here on the dark network front! Updates to follow if we can keep it together.
Options are running thin fast. I’m seriously contemplating having to retreat to the office just to get some decent work done. Non-essential, bandwidth-limiting programs like the VPN have already been cut off.
I managed to scrape together just enough data to transmit this rough photo giving a glimpse of the tense atmosphere (seen above). Even falling back to being “that guy” who turns off his video during our next Zoom meeting is being quietly mulled as a potential last resort.
As night falls with no repairs in sight, the fight to regain control of my internet access continues. I’ll keep transmitting updates from the digital warzone as events allow, but for now all any of us can do is keep our fingers crossed that reinforcements arrive soon to liberate our stopped-up information lines. Wish us luck!
This post is about the attentional leverage of searching through AI, specifically with the Poe Web Search bot. Sometimes the AI gives me the answer in the summary, sometimes it gives me the link where I need to dig deeper, sometimes it doesn’t find anything good. Regardless…there is no attentional rabbit hole.
What are my figures of merit?
Time to a good answer — the time cost or opportunity cost.
# of tabs opened to get to a good answer — the cost of attention-switching, which I’m really trying to avoid.
One of the traps of a traditional search engine is that it really feels like you’re doing important work. You are on a detective case. You are weeding out the nonsense. You are finding the hidden gems. If you truly enjoy searching and browsing for the sake of searching and browsing, then more power to you and please don’t let me ruin your pastime.
But I ain’t got time for that ish. More importantly, I ain’t got attention for that ish. I want the answer now, even if it’s just to know that there isn’t a good answer on the web. If I can’t have it now, then I want the answer with as little attention switching as possible.
In my experience, Poe Web Search is “smarter” than Bing. In other words - better results and no ads. What’s even cooler is that once I get the answer, I can quickly input that into one of many very useful Poe bots: Claude 2, GPT-4, Code Llama, 6 Hats Helper, etc. Poe is kind of like Inspector Gadget - there is a chat bot for every job.
pSai is trained on all of polySpectra’s technical product information. We’re now using it as an AI-augmented search function for the entire polySpectra.com website. I wrote about my initial experience building pSai here.
Resina is a much more challenging project. Our goal is to train it on all of resin 3D printing. This will be pretty hard. Resina is accessible at resin3D.ai.
Only a couple hours after emailing our list, we already have a volume of unanswered (or poorly-answered) questions that would be pretty overwhelming for a lowly human to try to respond to on their own. I’m excited by the challenge of figuring out a scalable system from the start.
We’re refining a setup where Resina learns a bit more every night. So, even if it can’t answer a certain question today, the hope is that it’ll know the answer by tomorrow. There are still a lot of humans-in-the-loop, but we’re trying to make the system as automated as possible. (With humans as curators/editors of the AI’s knowledge base.)
We’re aiming for something like this:
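Roughly, in code form - every function name and file path below is a hypothetical placeholder, since the real pipeline depends on the bot platform and the knowledge store, but the division of labor is the point: the bot logs what it couldn’t answer, a human curates, and only curated answers get folded back in overnight.

```python
# Rough sketch of the nightly "Resina learns a bit more" loop.
# All function names and file paths are hypothetical placeholders.
import json
from datetime import date
from pathlib import Path

QUESTIONS_LOG = Path("unanswered_questions.json")   # logged by the bot during the day
KNOWLEDGE_BASE = Path("knowledge_base.json")        # the curated source of truth

def load(path: Path) -> list[dict]:
    return json.loads(path.read_text()) if path.exists() else []

def human_review(question: dict) -> dict | None:
    """Placeholder for the human-in-the-loop step: a curator either writes a
    vetted answer or rejects the question. Here it just prompts on the console."""
    answer = input(f"Q: {question['text']}\nCurated answer (blank to skip): ").strip()
    if not answer:
        return None
    return {"question": question["text"], "answer": answer, "added": str(date.today())}

def nightly_update() -> None:
    kb = load(KNOWLEDGE_BASE)
    for question in load(QUESTIONS_LOG):
        entry = human_review(question)
        if entry:
            kb.append(entry)            # only curated answers enter the knowledge base
    KNOWLEDGE_BASE.write_text(json.dumps(kb, indent=2))
    QUESTIONS_LOG.write_text("[]")      # tomorrow starts with a clean slate
    # re-indexing / re-embedding of the knowledge base would happen here

if __name__ == "__main__":
    nightly_update()
```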
A few fun experiences from today:
We must be doing something right, because we already have a few really angry dialogs with people who appear to be major haters of polySpectra (and maybe AI too?).
I’ve been having a lot of fun with the combination of GitHub CoPilot and Cursor.sh. CoPilot quickly tries to autocomplete while Cursor can really do the full-context heavy lifting. These tools are designed for developers (and I am not a developer), but I’m finding them to be really useful for content creation. (See yesterday’s post: ) The combination is surprisingly good at coming up with new questions to ask Resina.
Very early exploration with the possibility of giving Resina a voice. ElevenLabs is super impressive.