Offset Monster Is Born

Yesterday’s Hackathon was a ton of fun.

I didn’t quite manage to finish the full workflow of Offset Monster in time for the demo, but I forced myself to present my work in progress anyway.

I met a lot of cool folks, got a ton of help from the Poe/Quora team (thank you for your patience), learned about 10 different ways to solve most of the things I was struggling with, and did successfully figure out how to deploy my first Poe server bot.
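For the curious: the skeleton of a Poe server bot is pleasantly small. Below is a minimal sketch using the fastapi_poe library; the OffsetMonsterBot class and its canned reply are placeholders of mine, not the actual bot, so treat the details as illustrative.

```python
# Minimal Poe server bot sketch using the fastapi_poe library.
# OffsetMonsterBot and its reply are illustrative placeholders.
import fastapi_poe as fp

class OffsetMonsterBot(fp.PoeBot):
    async def get_response(self, request: fp.QueryRequest):
        # Grab the user's latest message; a real bot would estimate
        # a carbon footprint here instead of echoing.
        last_message = request.query[-1].content
        yield fp.PartialResponse(text=f"Feed me! You said: {last_message}")

if __name__ == "__main__":
    # The access key comes from the bot's settings on poe.com.
    fp.run(OffsetMonsterBot(), access_key="<YOUR_ACCESS_KEY>")
```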

Poe Hackathon at AGI House

Another time it would be fun to do a retrospective on this whole experience, but honestly I’m too tired.

Today I fixed up one of the missing pieces and now have a functional version of Offset Monster running on Poe. Check it out! This monster is ferocious and will attempt to offset the carbon footprint of anything you feed it.

Offset Monster is live now!

Making Carbon Offsets Fun, Conversational, and Contagious

Offset Monster makes carbon offsetting more accessible and viral through a fun, conversational AI bot. After calculating the carbon footprint of anything, Offset Monster makes it easy to buy a corresponding carbon offset for the item/activity in question, on the spot.

Poe Hackathon at AGI House

Today I have the great privilege of participating in the Poe Hackathon at the infamous AGI House.

I am frankly quite surprised that they would allow a lowly chemist such as myself to participate. I did include links to Resina and my Six Hats Helper chatbot in my application.

I’ve never participated in a hackathon before. Most of the products that I have built took five to ten years to get to a minimum viable product, not five to ten hours.

Maybe it’s because I have absolutely no experience in this culture, but I am imagining fierce competition among the “stallions” of the Silicon Valley TV show.

My imagined 'competition' at today's hackathon. Image credit 'lol valley' on YouTube.

Or maybe I’m just going to hang out with a bunch of AI nerds all day. I’m stoked either way.

Here’s what I’m building today…

Offset Monster

Making Carbon Offsets Fun, Conversational, and Contagious

Offset Monster makes carbon offsetting more accessible and viral through a fun, conversational AI bot. After calculating the carbon footprint of anything, Offset Monster makes it easy to buy a corresponding carbon offset for the item/activity in question, on the spot.

Demo: https://poe.com/OffsetMonster

(This link is likely to break repeatedly over the coming hours/days/weeks.)
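For concreteness, here is the rough shape of that workflow in plain Python. Everything in this sketch (the footprint numbers, the offset price, the checkout URL, and the function names) is a made-up placeholder; the idea is that the real bot leans on the LLM for the footprint estimate and hands off to a live offset checkout.

```python
# A hypothetical sketch of the Offset Monster flow. The footprint
# numbers, offset price, and checkout URL are all illustrative.

# Rough CO2e estimates (kg) for a few example items/activities.
FOOTPRINTS_KG = {
    "cheeseburger": 3.0,
    "sfo-jfk round trip flight": 1000.0,
    "year of average driving": 4600.0,
}

OFFSET_PRICE_PER_TONNE_USD = 15.0  # placeholder offset price

def feed_the_monster(item: str) -> str:
    kg = FOOTPRINTS_KG.get(item.lower())
    if kg is None:
        return "The monster hasn't learned that one yet."
    tonnes = kg / 1000.0
    cost = tonnes * OFFSET_PRICE_PER_TONNE_USD
    # In the real bot, this would be a live offset checkout link.
    url = f"https://example.com/offset?tonnes={tonnes:.3f}"
    return (f"~{kg:.0f} kg CO2e. Offsetting costs about ${cost:.2f}. "
            f"Devour it here: {url}")

print(feed_the_monster("cheeseburger"))
```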

an offset monster

Stuck, So Close

I’m stuck…so close. I thought I had it all ready to rock. It didn’t work like it should have. Then the backup plan was foiled because I left that one tiny thing at home.

I can’t actually test it because I don’t have access. Anything I try will just be shooting in the dark.

I can see all the pieces, but can’t actually make progress. It’s an intellectual bardo.

What do we do in these situations?

Pen and paper? Plan it out?

Surrender and take a break?

Make a plan to be sure this never happens again?

What do we do when we can’t do what we planned to do?

Stripe Is Not for Wholesale

Until today, I would have said that Stripe has come a long way since its start as “PayPal for developers”. Despite the company being valued at almost $100B, its main product still does not offer an option for the most common pricing scheme in the history of the world: the volume discount.

Ironically, you can set up a volume discount for a recurring subscription, but not for a one-time purchase. Apparently no one at this massive company has ever sold physical products, or wants to. The confusing thing is that the “one time” payment option gets automatically converted to “recurring” if you pick “volume” pricing. What is truly hilarious is that Stripe’s customer support has no idea why anyone would ever want to use their product for just a one-time purchase with a volume discount.
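To make the complaint concrete, here is roughly what the attempt looks like with Stripe’s Python client. The product ID, API key, and tier amounts are placeholders, and I am sketching the parameters from Stripe’s documentation, so verify them before relying on this.

```python
# Sketch: tiered (volume) pricing with Stripe's Python client.
# Product ID, key, and amounts are placeholders.
import stripe

stripe.api_key = "sk_test_..."  # placeholder

# Volume pricing on a subscription: this works.
stripe.Price.create(
    product="prod_XXXXXXXX",
    currency="usd",
    billing_scheme="tiered",
    tiers_mode="volume",
    tiers=[
        {"up_to": 99, "unit_amount": 500},     # $5.00 each, up to 99 units
        {"up_to": "inf", "unit_amount": 400},  # $4.00 each at 100+
    ],
    recurring={"interval": "month"},  # tiers are only accepted on recurring prices
)

# Drop `recurring` to make it a one-time price and the tiers are
# rejected, so the volume discount has to be computed by hand:
def volume_unit_amount(quantity: int) -> int:
    """Hand-rolled volume discount, in cents."""
    return 400 if quantity >= 100 else 500
```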

I guess you could say that they are staying true to their founding mission: developers only. While I don’t self-identify as a developer, I can tell you that a developer could fix this oversight in about 5 minutes.

I filed a feature request, but I’m not getting my hopes up. After all, I am just a confused manufacturer of physical goods, trying to implement the most ancient of discounts. And manufacturing is only an $11 trillion industry…probably not big enough for anyone at Stripe to care.

RAG Information Overload

Retrieval Augmented Generation (RAG) is getting easier by the minute - I can’t keep up with the daily influx of new tools in the space. My initial experience with ChatGPT was not positive - I couldn’t believe how useless the tool was. The “hallucinations” were what really threw me off - the LLM’s propensity to just make up random information was too high.

RAG changed all of this for me. With the ability to train the AI on my sources of truth, all of a sudden this became an indispensable tool. I really was blown away by the power of RAG to shift the functional utility of LLMs.

Today, I noticed something funny - our customer support AI seemed to have forgotten its training. It was now getting questions “wrong” that it previously had an amazing track record of answering.

The culprit? Information overload.

Originally we only trained pSai on the polySpectra documentation website and the product pages from our e-commerce store — in part because I wanted it to just be able to answer basic product questions, and in part because I ran into a technical difficulty getting it to scrape the entire polySpectra website when I first set it up. Recently, I figured out how to train it on the entire polySpectra.com website, which at first seemed like a good thing.

Unexpectedly, more became less. This extra information was the RAG that broke the camel’s back. There were just enough conflicting sources of truth in the AI’s verified sources to confuse it. Where before it was doing an amazing job, it is now giving wrong answers.

Without the sources of truth, LLMs are pretty useless customer support agents. With just enough information, they are surprisingly good. With too much, they become unhelpful again.
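The practical fix, at least for now, is curation at indexing time: feed the bot only the sections we actually treat as sources of truth. A sketch of that idea (the URLs and the helper are hypothetical; whichever RAG tool you use, the point is the filter, not the tool):

```python
# Hypothetical sketch: scope a RAG index to curated sections of a
# site instead of crawling everything. URLs below are illustrative.

ALLOWED_PREFIXES = (
    "https://polyspectra.com/docs/",   # product documentation
    "https://polyspectra.com/store/",  # e-commerce product pages
)

def keep_for_index(url: str) -> bool:
    """Only index pages from the curated 'source of truth' sections."""
    return url.startswith(ALLOWED_PREFIXES)

crawled = [
    "https://polyspectra.com/docs/cor-alpha",
    "https://polyspectra.com/blog/old-announcement",  # stale info lives here
    "https://polyspectra.com/store/cor-black",
]

to_index = [u for u in crawled if keep_for_index(u)]
print(to_index)  # the stale blog post never reaches the index
```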

The role of the humans in this situation is clearly to maintain a single source of truth. It makes me wonder how many humans we’ve confused with our website over the years…much more to distill and refine.

It also makes me wonder what my “context window” is. How many things do I get confused about by having access to too many sources of information?

Candy-Coated Muertos

There is something very telling about the fact that Bing Image Creator considers this prompt to be “unsafe content”:

dia de los muertos skull, dayglow poster with extremely bright colors and black background, detailed mandala patterns and flow lines

We’ve candy-coated our dead. Halloween is for selling sugar and sexy costumes. Nothing truly scary. No room for a conversation about mortality. No room for reverence.

Disney costumes and drinks.

I’m all for the play. I’m all for the chance to assume a character.

I’m all for celebration. Death can be a tremendous celebration. I’m not suggesting we make it somber.

But death is real. And scary. And coming for each of us.

What a missed opportunity to have a really meaningful cultural conversation. It’s hard to hear the wisdom of the dead when you’re chewing on candy.

Unsafe for Whom?

Towards a taxonomy of logical families, an initial list:

  • Sangha
  • Team
  • T-group
  • Crew
  • Squad
  • Club
  • Troupe
  • Tribe
  • Collective
  • Circle
  • House
  • Salon
  • Friday Night Skate

…and half-baked topology…

(the taxonomy is still a work in progress)

Social Flexural Modulus

Different social media platforms have different expectations, a different culture, a different cadence. None of that interests me in the slightest.

But I am on a mission, and I have to play the game if I want anyone to read what I have to write, to hear what I have to say. RAW.works is a reset, an inquiry. How do I want to show up on the internet? How do I want to choose to engage with 8 billion people and 8 trillion robots every day?

I certainly don’t want Elon Musk choosing for me. I don’t want Alphabet choosing for me. I don’t want Apple choosing for me.

I acknowledge and respect the power of these tools. I see that if I want to show up in a search result, there are certain things I need to do to format my website for Google. With these new AI search tools like Poe Web Search [link my old post], perhaps that game is going to change a little bit. But we still need to bend to be noticed, which is increasingly true in a world where more content can be created in a day than consumed in a lifetime.

The intentionality part is: How much does that matter to me? Does SEO matter 0% to me? Does it matter 1% to me?

How far am I willing to bend my interests and my attention, to garner the interest and attention of others?

This question is at least as old as the first living organisms. The struggle to find a niche in a complex system. Initially, it was just about survival. As more and more people ascend Maslow’s Hierarchy of Needs, it becomes more cultural, more psychological, more philosophical.

But we need to be careful, because wars and pandemics, climate chaos and rogue AGI have the potential to bring us all the way back to survival.

The memelords won’t have much to offer in the way of food and shelter. Infinite jest. Wirehead now. Xanadu.

How far am I willing to bend my interests and my attention, to garner the interest and attention of others?

On a more Adlerian plane…maybe the greatest social good is achieved when we don’t bend so much. Or at least we don’t spend so much time worrying about bending. We either bend or we don’t. We pick the way we’re willing to wiggle, or maybe the wiggle picks us, but either way — we wiggle, we dance.

If you hire a mechanical turk, you are going to get buttons pushed.
If you hire a contractor, you are going to get billable hours.
If you hire an employee, you are going to get a butt in a chair.
If you hire a consultant, you are going to get a report.
If you hire an expert, you are going to get an opinion.

In none of these cases are you guaranteed to get the true enrollment of a human being. You would likely expect the attention of a human being, but these days you might have to pay extra for that. In the case of the mechanical turk, the probability is very high that they are actually using AI to “cheat” at their “jobs”. And who can blame them? I would do the same thing.

As an exploration of the concept of enrollment, let me tell you about the time I failed to hire Don McCurdy.

I first stumbled upon Don’s work through my interest in augmented reality. He was working at Google at the time and was a major contributor to Model Viewer, as well as the three.js library that powers it. I later realized he was even contributing to the glTF specification itself, which is poised to be the JPEG of 3D models. (Any day now, you’ll see.)

While I was in the process of building polySpectra AR, I saw that Don had left Google. Between this personal announcement and noticing his developer “tip jar” on GitHub, I figured that he would be open to at least discussing the possibility of working together.*

I sent him an email. No reply. I followed up. No reply.

The funny thing was that during this time when I couldn’t get any response to my emails, Don was incredibly generous with answering lots of my programming questions related to the problems we were having with polySpectra AR. I quickly realized that if I showed respect for his craft, showed I had done enough searching to attempt to solve the problem myself (or had our developers try to solve it themselves), and presented the question in the channels of Don’s choice (GitHub & Discord at the time) — then I was fairly likely to get Don to solve my problem within a day or two. Completely for free, on his own time, just for fun.

I literally could not pay this expert. I could not get him to engage in a discussion about money. I could not buy his attention. But by engaging in a real dialog in an appropriate forum, I could see that Don was clearly enrolled in the idea of helping other people solve their problems with web AR. He was particularly helpful when I asked questions about how to use the amazing library that he wrote, glTF Transform. This makes sense: it is his project, he cares deeply about it, and he is enrolled in helping other people use it.**

Don is not an outlier. This is a very common pattern I have seen with the best in the world. If you offer money, you get no reply. If you engage in a genuine dialog, you can get the best advice, from the smartest people, for free.

Money can’t buy you enrollment.


* Re-reading this, I am laughing out loud at the way this phrase slipped in: “working together”. But that’s exactly what this is all about. What does it mean to “work together”?

** It looks like Don is now experimenting with a new way to monetize his expert attention: glTF Transform Pro.