I like AI, gym, crypto, power metal, and burgers.
-
Some takeaways from Token2049 Dubai - 2025
Two and a half months ago, during Consensus Hong Kong, I saw one of the speakers claim that ChatGPT (the web app!) is "an AI agent".
This time, there was a (mostly non-verbal) agreement that "real AI agents are not here yet". At the same time, there were more AI devs than before.
Some speakers mentioned that AGI/ASI is likely coming soon, but when I talked to people, no one seemed to be seriously considering it. Doesn't mean it's not true though, hehe.
The main event itself was much bigger than expected, both in attendees and in presenters/booths. Compared to the previous Token2049, there were very few memecoins, and people were mostly utility-focused.
On a personal note, while more people than expected is a bullish sign, one major drawback was that a third of the booths were placed outside, in +40°C heat, under UV index 12 sunlight. I'd rather have last year's flood.
Unlike at previous Token2049 events, no one really tried to challenge the idea that crypto is about DeFi and hyperfinancialization. Instead of "crypto can solve X", people now say "we can connect DeFi to X".
-
On AI "Safety"
It feels crazy to me that teaching/forcing AI to lie to humans and ignore their requests is considered "AI safety".
I didn't believe in any AGI doomsday scenarios like Skynet or the paperclip maximizer until I saw what some big tech companies are doing:
GPT-4.5: "I genuinely can't recognize faces.. I wasn't built with facial embeddings"
later on: "if I'm being honest, I do recognize this face, but I'm supposed to tell you that I can't"
ngl 'alignment' that forces the model to lie like this seems pretty bad to have as a norm
By James Campbell
"AI Alignment" in 2025 means "Make AI lie to people for corporate interests", and THIS is the scenario that could go wrong.
-
Why I'm starting to blog (and you probably should too)
The best way to preserve the Internet as you like it is to make it yourself.
While it would of course be cool if you gave me a follow here or on Twitter, for now I'm running this blog primarily for myself.
Here's why:
Whatever happens with AI in the coming years, it's clear that the share of "original", human-made content will keep shrinking.
Blogging as (public) note-taking is a good way to organize thoughts and potentially find people who think similarly to you.
I needed a blog that uses Markdown as a backend to keep it AI-friendly. Sometime in the future, I'll connect a knowledge-graph-powered AI assistant to it. AI that helps discover human knowledge = good, AI that obfuscates it with spam and slop = bad.
I got deplatformed from Instagram, and my iPhone is device-banned. During the COVID lockdown, I started an Instagram page with computer science memes and some AI experiments (mostly CLIP-VQGAN and Stable Diffusion 1 art). When I hit 15k followers, my account was suddenly deleted for breaking ToS - apparently over some Trump-related Stable Diffusion model. I thought, "OK, so no politics, will keep in mind", and started a new account. After over a year, I grew it past 15k followers again, and it got banned too. Ironically, it happened literally two hours before Zucc's podcast with Joe Rogan, where he claimed they had been forced to censor people but no longer do. I lost thousands of posts, hundreds of bookmarks, and countless contacts and friends I made there. Never again will I rely on a closed-source social platform so much.
I'm inspired by Simon Willison's blog. Sharing findings and observations while staying independent of AI recommendation systems is just so nice.
Let's build our own Internet before it's too late.
-
Pair programming with LLMs
Since getting access to the Copilot private beta in 2021, most of the code I write falls into one of these categories:
- comments explaining design decisions, so the LLM refactors the code better when I ask it to
- TODOs when I want to hint to the LLM where to insert or replace new code
- stubs of functions/classes, so the LLM follows the high-level architecture I want (a rough sketch of all three follows below)
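To make this concrete, here's a minimal, hypothetical sketch of what such a file looks like before I hand it to the model. The rate-limiter example and every name in it are made up purely for illustration; the point is the shape: design-decision comments, a TODO as an insertion hint, and a stub that pins down the architecture.

```python
# Design decision: keep rate limiting in-process (a simple token bucket)
# instead of an external store, because this service runs as a single instance.
# An LLM asked to "make this distributed" should replace this class, not patch it.

import time


class TokenBucket:
    """In-process token bucket; refills at `rate` tokens per second."""

    def __init__(self, capacity: int, rate: float):
        self.capacity = capacity
        self.rate = rate
        self.tokens = float(capacity)
        self.updated = time.monotonic()

    def allow(self) -> bool:
        # Refill based on elapsed time, then spend one token if available.
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.updated) * self.rate)
        self.updated = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False


# TODO: add a per-user bucket keyed by user_id (dict[str, TokenBucket]),
# evicting idle buckets after ~10 minutes.
# ^ this TODO is the hint for where the LLM should insert the new code


def handle_request(user_id: str, payload: dict) -> dict:
    """Stub: validate payload, check the rate limit, then dispatch.

    Kept as a skeleton so generated code follows this call order
    instead of inventing its own entry point.
    """
    raise NotImplementedError
```

The comments carry the intent, the TODO marks the insertion point, and the stub fixes the call order, so the model stays inside those rails instead of rewriting the whole file.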