Time is a curious thing. It’s a constantly flowing stream that can’t be paused, stopped, or repeated. We experience it, but we can’t control it. We can’t even touch or feel it.
To get a better grasp of this weird, intangible resource that governs everything around us, humanity has invented a variety of “time devices”. These devices help us to plan and optimize how we spend our time. To make the most out of the here and now.
The most popular time device is the watch. A watch is a useful tool, but its functionality is limited to the present moment. It allows us to see time, but not to manage it. It only tells us the status quo.
Calendars, on the other hand, cover the entire spectrum of time. Past, present and future. They are the closest thing we have to a time machine. Calendars allow us to travel forward in time and see the future. More importantly, they allow us to change the future.
Changing the future means dedicating time to things that matter. It means allocating our most precious resource to activities with the highest expected return on investment.
You would expect technologists and entrepreneurs to be intensely focused on perfecting such a magical time travel device, but surprisingly, that has not been the case. Our digital calendars turned out to be just marginally better than their pen and paper predecessors. And since their release, neither Outlook nor Google Calendar has really changed in any meaningful way.
Isn’t it ironic that, of all things, it’s our time machines that are stuck in the past?
The essay at hand is an exploration of what calendars could be if they weren’t stuck in time. But before we discuss their future, we first need to analyze their present status and how they fit into the rest of the productivity stack.
02 Calendars and the productivity stack
Our productivity stack consists of four types of tools:
Note-taking apps: to document and organize our thoughts
Email: to communicate with others
Task managers: to organize the things we need to get done
Calendars: to manage our time
The fact that we use four distinct tools suggests that note-taking, email, task management, and time management are four distinct activities. But when you look closer, you’ll realize that these activities are actually not that clear-cut. In fact, they all heavily overlap. Notes are just emails to your future self. Emails are just tasks. And tasks are just calendar events.
My personal workflow looks like this:
I treat my email inbox as my primary task manager (and note-taking tool).
Tasks are emails I receive from others or emails I send to myself.
I snooze emails until the week I want to get them done.
At the beginning of each week, I go through my email todo list and block time in my calendar for each task.
The email<>todo part of this workflow actually works reasonably well. Most of today’s email clients are built around the concept of Inbox Zero, which effectively turns your email inbox into a todo list with public write access.
The part we haven’t really figured out yet is the intersection between task managers and calendars.
Treating todos as calendar events is helpful because calendars introduce constraints. A calendar forces you to estimate how long each task will take and then find empty space for it on a 24 hours × 7 days grid, which is already cluttered with other things. It’s like playing Tetris with blocks of time.
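That Tetris game can be sketched in code. The following is a minimal, hypothetical illustration (all names and the flat minutes-per-week time model are my own assumptions, not any real calendar API): given the events already on the grid and an estimated task duration, find the first empty slot that fits.

```typescript
// Illustrative sketch: "calendar Tetris" as a first-fit search.
// Times are minutes from the start of the week; shapes are invented.

interface Slot {
  start: number;
  end: number;
}

// Return the first gap of at least `duration` minutes between
// existing events, or null if the week has no room left.
function findFreeSlot(
  events: Slot[],
  duration: number,
  weekEnd: number = 7 * 24 * 60
): Slot | null {
  const sorted = [...events].sort((a, b) => a.start - b.start);
  let cursor = 0; // earliest time not yet claimed by an event
  for (const e of sorted) {
    if (e.start - cursor >= duration) {
      return { start: cursor, end: cursor + duration };
    }
    cursor = Math.max(cursor, e.end);
  }
  return weekEnd - cursor >= duration
    ? { start: cursor, end: cursor + duration }
    : null;
}
```

A real scheduler would add constraints (working hours, travel time, priorities), but the core problem is exactly this kind of packing search.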
So how do we get tasks into our calendars without awkwardly switching back and forth between two different apps that don’t talk to each other?
New productivity tools such as Amie are trying to solve this problem by natively inserting todo lists into the calendar experience. In Amie, every calendar event is a task that can be marked as done.
This approach is a step in the right direction, but it doesn’t go far enough. I agree that tasks should live in your calendar, but that doesn’t mean every calendar event should be a task. The way I see it, tasks are just one of many different types of calendar events. And just one of many different calendar layers.
03 Managing time in three dimensions
To make the concept of calendar layers a little more tangible, let’s look at a scenario that you have probably seen before:
What’s happening here?
1. You have a meeting with Mike. A meeting is a multiplayer calendar event. It is not the same as a task. It is simply a reminder for all meeting participants that their presence is required (or desired) at a specific time and place. There is no “to do” here apart from showing up on time.
2. You need to travel to and from your meeting. To ensure that no other meetings are scheduled during those travel times, you added two “do not schedule” blocks (DNS). These are neither meetings nor tasks. Their only purpose is to avoid conflicts with other upcoming events.
The DNS blocks appear before and after the meeting, but what they really represent is one entire layer of time that stretches from 10:30 to 12:30. Your conversation with Mike is a meeting layer on top of your blocked time layer.
We tend to think of calendars as 2D grids with mutually exclusive blocks of time, but as this example shows, not all events automatically cancel each other out. Depending on their characteristics, they can be layered on top of each other. This means we manage time in three, not two, dimensions.
Let’s see if we can add another layer to the mix.
As discussed at the start of this chapter, neither blocked time nor meetings qualify as tasks — but what about talking points or agenda items that need to be covered in your meeting?
We are now looking at three different types of calendar events, each with their own unique set of properties. The problem is that our calendars treat all of these different events equally. They don’t natively differentiate between a task and a meeting even though they are two completely different things.
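To make that distinction concrete, here is one way typed calendar events could be modeled. This is purely a hypothetical sketch (the type names and fields are my own assumptions, not any shipping calendar's data model): each event kind carries its own attributes, so the calendar can treat them differently.

```typescript
// Hypothetical data model: tasks, meetings, and blocked time as
// distinct event types with their own attributes.

type CalendarEvent =
  | { kind: "meeting"; start: Date; end: Date; participants: string[] }
  | { kind: "task"; start: Date; end: Date; done: boolean }
  | { kind: "blocked"; start: Date; end: Date; note?: string };

// Only tasks carry a completion state; meetings and blocked time do not.
function isCompletable(e: CalendarEvent): boolean {
  return e.kind === "task";
}
```

With a model like this, "mark as done" would only ever appear on tasks, while DNS blocks and meetings could get behaviors of their own, which is exactly the differentiation today's calendars lack.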
When I chatted with Cron founder Raphael Schaad about this issue, he pointed out another missing layer: Activities. An activity takes place for a prolonged period of time, but only requires your attention at certain points of it — not throughout.
Flights, for example, should be native calendar objects with their own unique attributes to highlight key moments such as boarding times or possible delays.
This gets us to an interesting question: If our calendars were able to support other types of calendar activities, what else could we map onto them?
What’s so interesting about this idea is not just that it introduces another unique calendar layer, but that the data of this layer is rooted in the past. In contrast to traditional calendar events, all of these Spotify entries were created after they happened.
Something I never really noticed before is that we only use our calendars to look forward in time, never to reflect on things that happened in the past. That feels like a missed opportunity.
While a Spotify layer might seem more like a gimmick than a meaningful productivity hack, the idea of visualizing data from other applications in the form of calendar events feels incredibly powerful. What if I could see health data alongside my work activities, for example?
My biggest gripe with almost all quantified-self tools is that they are input-only devices. They are able to collect data, but unable to return any meaningful output. My Garmin watch can tell me my current level of stress based on my heart-rate variability, but not what has caused that stress or how I can prevent it in the future. It lacks context.
Once I view the data alongside other events, however, things start to make more sense. Adding workouts or meditation sessions, for example, would give me even more context to understand (and manage) stress.
Sleep is another data layer that would make a lot more sense in my calendar than in a standalone app. I already block time in my calendar for sleep (mostly as a DNS-memo to coworkers in other time zones), so why not add sleep quality data directly to that calendar event?
This way I could plan my day ahead with a lot more accuracy. Fully recharged after a solid eight hours of sleep? Block more focus time. Lack of deep sleep? Add another coffee break to the agenda.
This example is particularly interesting because it leverages all of our calendar’s time travel capabilities. It allows us to shape the future by studying the past.
Once you start to see the calendar as a time machine that covers more than just future plans, you’ll realize that almost any activity could live in your calendar. As long as it has a time dimension, it can be visualized as a native calendar layer.
Most of these data layers are pretty meaningless in isolation; it’s only when we view them alongside each other that they unlock their value. Even a Spotify layer starts to make sense when you look at it in combination with stress data (which music calms me down?), productivity metrics (which music helps me focus?), or personal activities from the past (nostalgia).
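The mechanics of "viewing layers alongside each other" boil down to joining layers by time overlap. A minimal sketch, with invented entry shapes purely for illustration: pair up entries from two layers whose time ranges intersect, e.g. which tracks were playing during high-stress periods.

```typescript
// Illustrative sketch: correlating two calendar layers by time overlap.
// Field names and the flat time model are assumptions, not a real API.

interface LayerEntry {
  layer: string;  // e.g. "spotify", "stress", "sleep"
  label: string;  // e.g. a track name or a stress level
  start: number;  // minutes since some epoch
  end: number;
}

// Return [labelFromA, labelFromB] for every pair of entries
// whose time ranges overlap.
function overlapping(
  a: LayerEntry[],
  b: LayerEntry[]
): Array<[string, string]> {
  const pairs: Array<[string, string]> = [];
  for (const x of a) {
    for (const y of b) {
      if (x.start < y.end && y.start < x.end) {
        pairs.push([x.label, y.label]);
      }
    }
  }
  return pairs;
}
```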
05 Closing thoughts
The takeaway of this essay is twofold:
1. Calendars should natively differentiate between different types of calendar events. Tasks, meetings, blocked time, and other activities should look and behave differently depending on their respective attributes.
2. This would open the door to a virtually unlimited number of other use cases that could be integrated into the calendar experience in the form of unique calendar layers.
These changes would not just make the calendar a stronger center of gravity in the aforementioned productivity stack, but turn it into an actual tool for thought, where time serves as the scaffolding for our future plans and our memory palaces of the past.
The world’s most successful companies all exhibit some form of structural competitive advantage: A defensibility mechanism that protects their margins and profits from competitors over long periods of time. Business strategy books like to refer to these competitive advantages as “economic moats”.
One of the most cited types of moats is the concept of network effects. Network effects occur when the value of a product or service depends on the number of users. A positive network effect means that a product or service becomes more valuable to its users as more people use it.
Network effects are extremely hyped and have become a bit of a meme in recent years among tech entrepreneurs, investors and policy makers. The five largest companies by market cap in the US all seem to be built on some sort of network effect and some even go so far as to claim that 70% of tech value creation since 1994 is predicated on network effects.
While network effects can indeed be very powerful, they are also one of the most misunderstood – and in many cases overrated – concepts in business strategy. Not all network effects are created equal and the most successful ones are just a means to an end.
This essay isn’t about network effects per se. It’s about the end state that they can enable: Defaults. In the next few chapters I’ll teach you how to think in layers, explain why defensibility is really about real estate, and tell you what Salesforce and Jesus have in common (got your attention now, don’t I?).
Let’s get started.
02 Theory Primer
Before we dive into it, let’s start with a super quick theory primer on network effects.
Network effects can be broadly divided into three different categories: Direct, indirect and data network effects.
Direct network effects – as the name suggests – occur when the number of users has a direct impact on the value of a product. A telephone, for example, is worthless if you are the only person who owns one. But with every additional user who joins the telephone network, the number of people you can have a conversation with goes up and therefore the overall value of a telephone increases. The same dynamic is true for chat apps, social networks and p2p payment systems.
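This intuition is often formalized as Metcalfe's law: a network of n users supports n(n−1)/2 possible pairwise connections, so the value of a direct network effect grows roughly with the square of the user count. As a one-line sketch:

```typescript
// Metcalfe's law: possible pairwise connections in a network of n users.
// Each of n users can connect to (n - 1) others; divide by 2 so each
// pair is only counted once.
function possibleConnections(n: number): number {
  return (n * (n - 1)) / 2;
}
```

A telephone network of 2 users supports 1 conversation pair; at 10 users there are already 45, which is why each additional user makes the network disproportionately more valuable.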
Indirect network effects are a little more complicated. They occur in two-sided (or multi-sided) products where the size of one user group affects how valuable the product is to another user group. Operating systems are an often cited example here: The more users an OS has, the more attractive it becomes for developers. More developers and apps, on the other hand, make the platform more valuable for users, thereby generating a positive feedback loop between the two sides.
The third category, data network effects, describes products which become better with more users via the data those users generate. Waze is a popular example of this category as are products that are powered by machine learning. More users lead to more data which lead to better recommendations or predictions and, thus, a better user experience.
There are two other concepts I would like to briefly introduce you to: Switching costs and multihoming costs. Switching costs describe how difficult or expensive it is for a user to switch from one product to another, whereas multihoming costs describe how difficult or expensive it is to use multiple competing networks simultaneously.
Both of these costs are not necessarily monetary – they can also be psychological or time/effort-based. The higher the switching and multihoming costs, the more defensible a network typically becomes.
What gets people so excited (or worried) about network effects is not just their defensibility though. It’s the idea of reinforcing feedback loops and the resulting exponential growth that they enable. As any paper or blog post about the topic will point out, network effects create winner-take-all dynamics where only one or two firms end up dominating an entire industry and can’t be challenged.
But is that actually true?
03 Are Network Effects Overrated?
I have studied multiple network effect businesses for this essay and two things struck me.
First of all, it feels like many of the prototypical network effect companies are not as defensible as the literature suggests.
Today, Facebook is seen as the prime example of the power of network effects – and yet, one wonders where Facebook would be had it not acquired Instagram back in 2012. People also seem to forget about all the other social apps which Facebook has bought and subsequently shut down over the years. Why would a company that has supposedly “won the market” spend hundreds of millions of dollars to acquire smaller competitors?
Facebook and Instagram are also hardly the only social networks out there. Twitter, Reddit, YouTube, TikTok et al. all seem to be doing just fine. Are there no winner-take-all dynamics in social?
There is a similar pattern for chat apps: WhatsApp, iMessage, Telegram, Facebook Messenger and Snapchat all have at least 100 million users. Why doesn’t the market tip in favor of one or two of them as the literature suggests? Looking at the speed at which some of these apps have grown, one has to ask if network effects don’t perhaps have the exact opposite effect: Maybe they are not a defensibility mechanism for incumbents but a growth mechanism for new market entrants?
Another often cited network effect example is Google Search. Data network effects have made Google the best search engine on the planet. With each additional search query, Google’s algorithms become a little bit smarter and its search results better – which in turn attracts even more users and usage. This positive feedback loop makes Google unstoppable … and yet, Google pays Apple $15bn per year to remain the default search engine on iOS. Isn’t this a bit of a narrative violation?
Now, I’m not arguing that Facebook and Google aren’t extremely powerful companies. They clearly are. But I’m not convinced that their power is really based on network effects. Network effects might have helped them to get to where they are today, but their defensibility lies somewhere else.
The second thing that I found interesting hit me when I looked at examples of industries that had indeed achieved winner-take-all (or winner-take-most) dynamics:
If you study the list closely, you’ll notice that these businesses all have something in common. Something that the aforementioned industries don’t have. They are based on atoms, not just bits.
Why is that important?
In contrast to bits, atoms have marginal costs. You can copy and paste a piece of software at virtually zero cost, but producing an additional piece of hardware has all sorts of marginal costs associated with it (material, production, shipping, …). This means that atom-based network effect businesses have switching and multihoming costs.
If you want to switch from Android to iOS (or use both OSs simultaneously) you have to spend at least a few hundred dollars on a new iPhone. Software, on the other hand, is typically free to use, so switching to a new chat app, social network or search engine doesn’t come with a real price tag. Because of their marginal costs, atom-based businesses have a greater lock-in effect and thus defensibility.
Another under-appreciated aspect that makes multihoming less likely is that atom-based products have physical space constraints. You can’t have eight competing railroad networks because there simply isn’t enough real estate to build them. Similarly, most consumers wouldn’t be willing to carry around more than one smartphone with them. Your pockets also have limited real estate.
Software businesses don’t face this type of scarcity. You can install as many apps on your phone as you want. Bits make multihoming less expensive and less inconvenient.
So does that mean that network effects are only an effective moat for atom-based products and services?
Not quite, but you’ll see why the bits vs atoms distinction matters once you start to think in layers.
04 Thinking in Layers
You may have heard about the concept of “value chains” before. Value chains are horizontal visualizations of all the value-adding business activities involved in creating a product or service from start to finish. They are great to analyze traditional businesses and industries but they are not a very useful framework for tech companies.
If you want to understand the power dynamics between different platforms, aggregators and other players in a tech ecosystem, it’s better to look at them as a vertical stack with different layers.
On each layer of the stack, companies are trying to create value and to capture value. The lowest layers of the stack are typically the most powerful. If you are able to take control of a layer, you can dictate the terms of most of the value creation and value capture that is happening in the layers above you.
As a result, you see companies trying to
create layers on top of their business (everyone wants to be a platform)
move down the stack to get closer to the base layer (to increase defensibility)
What is the base layer?
The base layer is the final interface between the stack and the end user – which is typically an operating system tied to a piece of hardware. This is why atom-based network effects are so powerful: They help companies gain control of the most powerful layer of the stack.
Let’s work through a few examples to make this concept a little more tangible.
05 The Google Stack
Google started as a simple website that allowed users to search other websites. Thanks to the superiority of its PageRank algorithm, more and more users started using Google Search, resulting in better search results and thus in even more users switching over to Google Search.
Thanks to this powerful data network effect, Google was able to move down the stack. Google wasn’t “just a website” anymore, it became an aggregator that commoditized all other websites and made them layers on top of Google’s.
The data network effect, however, is not the real moat here. It’s just a means to an end. The end goal is to become a default on the layer below.
Let’s go back to atom-based network effects real quick. Earlier, we identified that they have two significant characteristics:
Marginal costs (resulting in high switching & multihoming costs)
Space constraints (resulting in high multihoming costs)
As we discussed, software doesn’t have marginal costs. The fact that you can copy and paste software at virtually zero cost has led us to a world of abundance where many things are free and infinitely available.
But what about space constraints?
Your initial reaction might be that software doesn’t have space constraints. We are talking about bits here, after all, not atoms. But if you look closely, you’ll notice that that’s not really true. Every layer in the stack has some sort of limited (pixel) real estate – and in a world of abundance, that scarce resource is extremely valuable.
Okay, back to the Google example.
If Google wants to become a default on the layer below the search engine, it must find and occupy limited real estate on the browser layer. There are billions of websites out there … but only one default: Your browser homepage.
Becoming the default browser homepage became Google’s actual moat. There can only be one homepage (no multihoming) and users are typically too lazy to change it (friction = switching costs).
As you well know, Google didn’t stop there. It moved down the stack and successfully conquered the browser layer with Google Chrome – which obviously shipped with Google Search as the default search engine. More importantly, it turned the address bar into a search box, thereby merging the browser and the search engine into what is effectively a single layer.
With Android – one of the most underrated acquisitions of all time – Google then moved even further down the stack. It now owns a significant chunk of the world’s most important base layer: the smartphone operating system. Unsurprisingly, Android ships with Chrome and Google Search pre-installed.
If you think about it, it’s kind of amazing that the largest operating system humanity has ever seen only exists to protect an advertising business two layers further up the stack.
Unluckily for Google, smartphone operating systems are not winner-take-all but winner-take-most markets. Not only does Apple have considerable market share in the smartphone market, its user base is significantly wealthier and thus more interesting to advertisers.
06 The Apple Stack
Apple is an interesting case study because its stack strategy has been the inverse of Google’s. It started with the base layer and then worked its way up the stack.
The first iPhone was released in 2007, but only in 2008 did the company launch the App Store. Opening up iOS to third-party developers created a new layer above the operating system: apps.
The apps layer is interesting because it created indirect network effects between developers and users, which added defensibility to the OS layer. That defensibility in turn meant that Apple was able to capture a lot of the value that was created on top of its platform. 30% to be exact.
30% value capture is great, but you know what’s even better? 100% value capture.
Over the last couple of years, Apple has increasingly launched and monetized its own apps (and services) on top of iOS. It has moved up the stack and is now competing with the third-party developers on its own platform.
Apple realized that it owns some of the most valuable pixel real estate in tech: The home screen. And the best way to monetize that real estate is by occupying as much as possible of it yourself.
The beautiful thing about defaults is that they beat almost any competing product – even if that competitor has strong network effects or is technically superior.
This is why Apple Maps has higher market share than Google Maps, why Apple Music is able to catch up with Spotify, and why Google pays Apple $15bn a year to remain the default search engine on Safari.
(I can already hear the Apple fanboys furiously typing. No, these apps aren’t pre-installed to create a better user experience. No, Apple Music is not a better product than Spotify. No, “privacy” is not a convincing argument that explains Apple Maps’ high market share.)
It’s important to point out that not all of Apple’s default apps exist to increase value capture. That might be true for Apple News, Apple Arcade and Apple Fitness, but clearly not for apps like Maps or Health. So why do those exist?
Some of them add stickiness and switching costs to the iOS platform. iMessage, for example, creates network effects that make it harder for users to switch to a competing OS layer like Android.
Other default apps – like Maps – are solely there to mitigate risk from upper layer apps that might become too powerful. You’ll remember from the Google case study that Search became synonymous with the browser. And that the browser became the real operating system on desktop computers. That’s a scenario Apple wants to avoid at all costs. So instead of taking the risk that a service like Google Maps or Spotify becomes too dominant and threatens the base layer, Apple will launch its own (default!) maps and music apps.
This also explains why Apple doesn’t allow any competing app stores, browser apps (that aren’t built on WebKit) or game streaming services. Users are also not able to add 3rd-party apps to their iOS lock screen, change their default camera app or give quick access to a payment app other than Apple Pay. All of these limited real estate defaults are fully in Apple’s grip.
Apple Pay and Wallet are particularly important for Apple because they are tied to the user’s identity – which is another important layer in the stack.
07 The Identity Layer
So far, the layers we have looked at were all pretty distinct. The Google and Apple stacks we analyzed are simply different platforms and applications that build on top of each other. But not all layers are distinct pieces of software. Some layers are blurry and don’t have a clearly marked place in the hierarchy of the stack. One of those layers is the identity layer.
Earlier in this essay, we described the base layer of the stack as “the final interface between the stack and the end user”. Operating systems like iOS or Android are great examples because there are no other layers between the smartphone and the end user.
But what if the end of the stack isn’t an interface? What if the lowest possible layer in the stack is actually the user itself?
Identity is a crucial component of almost every single layer in the stack – especially if that layer is a network (which most layers are). Every network is just a collection of nodes and if you want to build connections between those nodes you need an identity layer. Even in a pseudonymous network, each user has a consistent identifier.
The fascinating thing about identity is that it’s kind of a stack of its own. When you sign up to a new app or service, you almost always use an existing identity. Until recently, that identity was typically your email address or phone number. Both email and phone numbers are great base layers for online identity because they are (sort of open) standards.
Unsurprisingly, in their attempts to become defaults, other companies have tried to create their own identity base layers. One of them is Facebook.
Facebook is a typical network effect business: The value to its users grows as a direct result of attracting more users. On top of the core social network there are several other network effect loops including Messenger (direct network effects), Newsfeed (data network effects) and Marketplace (indirect network effects).
Facebook’s real stickiness, however, comes from another network effect. One that allowed the company to move a layer down the stack: Facebook Login.
Facebook Login is a single sign-on service (SSO). It allows people to sign up to other social platforms and services using their Facebook account. SSO is more convenient for users and thus promises greater sign-up conversion rates for 3rd-party platforms. The more people use Facebook Login, the more platforms will offer it, thereby increasing the total number of accounts that a single user has connected to their Facebook identity.
As a result, it becomes really difficult *not* to use Facebook. In contrast to Google and Apple, Facebook doesn’t own an operating system and, thus, doesn’t enjoy the defensibility of a pre-installed default. But because Facebook is the de-facto online identity layer for so many people, it is almost guaranteed to secure some of that limited pixel real estate on the user’s home screen.
Unsurprisingly, Facebook is not the only company trying to become the default identity layer – dozens of companies are. Which brings us to the question of “winner-take-all dynamics” and defensibility in the identity stack.
We started this essay with the observation that neither social networks nor messaging apps have tipped in favor of just one or two winning companies despite the strong network effect dynamics in these verticals. Interestingly, both of these product categories are also very closely tied to identity.
Instead of winner-take-all dynamics, we are seeing not just multiple identity aggregators (SSO services) but – more importantly – many, many different identity networks building on top of them. When you sign up to a new social app using Facebook Login, your (pretended) identity in this new social app is most likely going to be distinct from your Facebook identity – which is probably the reason you are signing up to a new social service in the first place.
As I wrote in Is This Real Life?, identity is not monolithic but prismatic. Who you are on Facebook is different from who you pretend to be on TikTok. Your real name LinkedIn persona is not compatible with the pseudonymous identity you use on fringe Discord channels to shill shady new NFT projects. Google+ Circles and Facebook Lists always got this wrong: They let us change who we shared with, but not who we shared as.
As a result, we see intense multihoming in the online identity stack. There might be clear winners in certain verticals (e.g. LinkedIn for “professional networking”), but “social” as a whole is not a winner-take-all market.
That changes once we start to move down the stack and get closer to the base layer.
Like in previous examples, multihoming becomes less likely the closer we get to the world of atoms.
Your various different online identities at the top of the stack get bundled into a handful of identity aggregators. Most online services don’t offer more than three or four different SSO options because – you guessed it – there is limited pixel real estate in the sign-up flow.
Chances are you don’t actively use more than two different email addresses (one for work, one for personal life). Your phone number is directly tied to a physical device which makes multihoming unlikely because of the aforementioned multihoming costs and physical space constraints.
Once we get to the base layer of the stack – your real life identity – multihoming becomes virtually impossible. Unless you are Jason Bourne or suffer from severe schizophrenia, you only have one identity in real life.
The most interesting identity defaults to occupy are therefore all the interfaces between your real world identity and your digital self. For example, your driver’s license (see Apple Wallet), your payment details (see Apple Pay), or your medical records (see Apple Health).
All of these touchpoints represent sources of truth – which brings us to the last but most important kind of network effect default: Intersubjective realities.
It’s like “pics or it didn’t happen” but for salespeople. A Salesforce entry is a timestamped proof point that a customer interaction or sales deal has actually taken place. It makes Salesforce an important source of company data. A system of record.
Salesforce is of course not the only available data source within a company. There are dozens of internal BI dashboards, spreadsheets and third-party software tools that track customer and revenue data. All of them are accurate in their own unique way, but they all tell you something slightly different.
A company can have multiple sources of data, but it can’t have multiple sources of truth. There is no multihoming on the truth layer. You need a *single* source of truth. A default.
Salesforce is not more accurate than any of the other company data sources. It is not objectively better. But it becomes intersubjectively better if enough people agree that it should be the single source of truth.
“If it’s not in Salesforce, it doesn’t exist” is a reinforcing flywheel. A social network effect. The more people believe in it, the truer it becomes.
This is Salesforce’s core defensibility. You can replace a CRM system, but can you replace “the truth”?
Intersubjective realities like being the default source of truth are not just intra-company network effects.
For many decades, a popular intersubjective reality was that “nobody ever got fired for buying IBM”. Salesforce feels like the 2021 version of that belief – and they are not the only company that fits that description.
I sometimes joke that every time I open Workday I’m reminded that we live in a simulation. Because getting to a $50bn+ valuation with a product as horrible as Workday’s is the best example we have of a glitch in the matrix.
Kidding aside, Workday’s real moat is, of course, the same as Salesforce’s and IBM’s. Nobody ever got fired for buying Workday. It’s the default HR tool of choice.
Of course, intersubjective realities don’t just lead to suboptimal product defaults like Salesforce or Workday. These are just extreme edge cases to prove my point. Most of the default software products of today (think AWS, Stripe, Notion) became defaults because their products are just really good. But even great products need shared beliefs to become defaults.
Beliefs are the world’s most powerful network effects. They don’t just explain the success of software products. The price of Bitcoin, democracies and dictatorships, capitalism, and the power of the church are all based on intersubjective realities. No moat is more durable than a shared belief.
See, the real base layer of the stack isn’t an interface. And it’s not our identity either. It’s our collective minds.
Like other defaults we analyzed in this essay, intersubjective realities are able to capture a scarce piece of real estate. The most valuable real estate there is: Mind space.
09 Closing Thoughts
So where does this essay leave us?
Some people will say that I oversimplified some network effect concepts and that a couple of my arguments are pretty speculative.
They are not wrong.
I very deliberately downplayed the importance of certain network effects and took a few shortcuts to make the narrative work. But as I pointed out at the beginning, this post isn’t really about network effects. It’s about thinking in layers.
This essay doesn’t end with a definitive answer to the question “Are network effects overrated?”, because there is none. Some are overrated. Some are underrated. Some are rated just right. It really depends on where in the stack you are.
Some people will say that this essay is a missed opportunity: “How can you write this essay without mentioning crypto even once?”
They are not wrong.
Decentralized protocols would make very interesting base layers and I’m particularly excited about web3 in the context of identity. Does crypto solve the problem of suboptimal defaults? Well, that’s a different question and I’m honestly not sure. Just because you have decentralized base layers doesn’t mean you won’t see centralization, aggregation and default rent seeking on top of them – no matter how you design them. Greed is like water: It always finds a way through.
I haven’t written about crypto in this essay because I’m still trying to wrap my head around it. It probably deserves a post of its own at some point.
Some people will say that intersubjective realities aren’t real network effects. They’ll say that I’m stretching the definition of network effects too far.
I think they are wrong.
I understand that the idea of social network effects is even more intangible and immeasurable than the already vague concept of “traditional” network effects – but it still surprises me how many people refuse to see beliefs as what they are. Not only do I think that beliefs are network effects, I think they are probably the most important and most defensible of them all. Perhaps this intersubjective reality just hasn’t reached its critical mass yet.
Clubhouse is an exciting product. And yet, in the last 77 days, I have actively used it exactly three times.
The problem with Clubhouse is that you can only listen to conversations live as they happen. Given that the majority of the current user base is in North America, the most interesting conversations usually happen in the middle of the (European) night when I’m asleep.
The first thing I see on my phone after I wake up is a handful of Clubhouse notifications telling me about all the interesting conversations I missed. I wish I could just download these conversations as podcasts and listen to them later.
Some have pointed out that the live nature of Clubhouse is exactly what makes it so special, comparing it to the ephemerality of Snapchat. And while I disagree on the Snapchat comparison (ephemerality ≠ synchronous creation and consumption), I do think it makes sense for Clubhouse to find its own native format rather than compete with podcasts directly.
While Clubhouse feels like live podcasts at the moment, I think over time it will probably evolve into something else. Something more unique.
The current state of Clubhouse reminds me of the early days of Twitter: People knew it was a unique new form factor, but they didn’t know how to use it yet. Most tweets were just short status updates. It took some time until the platform found its current form and use cases.
One of those use cases is “Twitter as a second screen”: live-commenting TV shows and sports events. I strongly suspect that this will become one of Clubhouse’s main use cases as well.
As I pointed out in Airpods as a Platform, I see audio primarily as a secondary interface: You listen to music while you’re working out, for example. You consume podcasts while you are driving or commuting. You talk on Discord while you’re playing a game.
Audio is a medium that supports and augments other activities.
So instead of thinking about whether Clubhouse should make conversations available as downloads, a perhaps more interesting question is what activities could best be augmented with live audio? What does Clubhouse as an audio layer for other content look like?
The most obvious use case seems to be sports (and other events that have to be consumed live). I would love to replace the commentary track of my TV sports broadcast with a select group of (Clubhouse) experts whose opinions I’m actually interested in.
If you’ve been following this blog for a while, you probably know by now that one of my favorite topics to think and write about is “status signaling”.
Signaling explains most of our everyday actions: what clothes we wear, which universities we pick and which religion we subscribe to. Everything has a hidden signaling component with which we communicate our desired tribal affiliation.
In Signaling-as-a-Service, I described the implications signaling has on the monetization of software businesses. For many traditional industries, monetizing the display of status is not a new concept. A Rolex watch, for example, is not better at telling the time than a cheap Casio watch. But a Rolex reveals something about its owners’ wealth and, thus, their status in society. It’s that status message that explains the difference in price.
Similarly, driving a Prius says something about your views on climate change. A Make America Great Again cap reveals something about your political affiliations. And Nike athletic wear signals a healthy, pro-active lifestyle.
Software is at a crucial disadvantage compared to these physical products because of its intangibility. A fitness app also signals a healthy, pro-active lifestyle, but no one can see it because it only lives on your phone. Everyone can see your Nike gear whenever you wear it in public. Software can’t offer the same benefit. It doesn’t have a signaling distribution channel.
This is why there is no software equivalent of a Rolex watch or a Louis Vuitton handbag. People aren’t willing to spend money on things other people can’t see they spent money on.
But it doesn’t have to be that way. As software is eating the world, the lines between physical and digital products are becoming increasingly blurry. As I have pointed out in my original essay, one way for software companies to solve the signaling distribution problem is to add a physical element to their software product.
In this post I want to explore this idea a little bit further. Specifically, we’ll look at neobanks – and their opportunity to monetize credit card signaling.
In the last couple of years we have witnessed the birth and rise of a new startup vertical: neobanks.
Neobanks differ from traditional banks in two ways:
1) Rather than relying on a physical branch network, the entire banking experience is managed via an app
2) Instead of the “how do you do, fellow kids”-cringiness that ad campaigns of traditional banks usually invoke when they try to appeal to a younger audience, neobanks are actually perceived as cool. In fact, many of them feel more like lifestyle brands than banks or tech companies.
Interestingly though, neobanks still use one very traditional banking element: physical cards.
At first glance, this might seem counterintuitive. If you are building a mobile-first bank, why not offer virtual cards and let users pay with their phone? Why go through the hassle of producing and shipping physical cards?
The answer is – you guessed it – signaling.
Think about it: Paying for things (offline) is a social activity. It’s an interaction between at least you and a cashier or waiter. But ideally, in a dinner scenario for example, you are surrounded by other people you want to impress: a date, a group of friends, or work colleagues.
This makes the moment you take out your card to pay the bill a great opportunity to make a statement and build social capital.
If you look at neobanks out there today, it’s pretty obvious that signaling is in fact one of the main benefits they offer – and almost the only thing they monetize (apart from interchange, of course):
The premium subscriptions neobanks offer usually don’t win on features but solely on nicer looking cards. The N26 or Revolut Metal plans, for example, don’t offer any additional features that really justify the ~€15 / month price tag. They do include a nice looking metal card though – that’s what people pay for.
Relatedly, it seems like most of the innovation in the industry is happening in card design. The actual banking products are more or less interchangeable; what differs is whether the card comes in titanium, wood, or glow-in-the-dark yellow.
You may have noticed that the credit card number has moved from the front to the back of the card. This makes it easier for users to share photos of their cards on social media as an additional signaling distribution channel.
Neobanks are popping up like mushrooms after the rain at the moment and it’s unlikely that this trend will end any time soon. Thanks to banking-as-a-service providers, we’ll likely see a lot of non-banking-companies add banking functionalities and cards to their product offering.
There’s an old Twitter joke that every app evolves until it eventually becomes a chat app. The 2020 version of that joke is that every app evolves until it eventually becomes a bank.
When I did some research on credit card designs recently, I was surprised by the sheer number of different neobanks already in existence. And yet, even though almost all of them offer well-designed cards, it’s shocking how similar they all are. It seems like all of them are focusing on the same target audience instead of differentiating their signaling messages.
Let me explain.
03 In-Groups, Out-Groups and Artificial Scarcity
In every signaling scenario there are two possible target audiences: An in-group and an out-group.
The in-group is the tribe you want to join and signal your affiliation to. The out-group is everyone else – people you want to distance yourself from.
It’s important to note that signaling in iMessage (think blue vs. green chat bubbles) is limited to the in-group, since these color codes are only visible to iOS users – Android users can’t see who in the group is using which operating system.
Signaling, however, grows stronger the larger the out-group is – as long as the out-group knows about the in-group. This is why luxury car manufacturers deliberately extend their advertising campaigns to people who will never be able to afford their cars: they are increasing the size of the out-group by educating people about the in-group.
At the same time, brands need to control the size of the in-group. The more exclusive the in-group, the higher the signaling strength and, thus, the monetization potential of a customer.
The easiest control mechanism for the size of the in-group is price. If you set the price high enough, few people will be able to afford the product. Ironically, this, in turn, justifies the high price.
Alternatively, companies create artificial scarcity by setting a hard number on supply. Limited supply creates FOMO and hype which increases the size of the out-group and results in higher social status for members of the in-group. Artificial scarcity explains the price of Bitcoin, Pokémon trading cards and why people spend hours queuing in front of Supreme shops.
The problem with keeping the in-group small is that it also limits the number of potential customers and, thus, overall revenue potential. Companies need to walk a fine line between maximizing the number of customers while simultaneously maximizing the number of people they can (afford to) exclude.
04 What Neobanks Should Build
The problem with neobanks today is that they all focus on the same in-group. Here are the premium cards of some of the largest European challenger banks – notice any difference?
It seems like everyone is trying to become the Apple of Banking – including Apple itself. The signaling messages are all about displaying economic power.
But we are slowly seeing new banking apps that are focusing on different audiences. For example, there are now a handful of “green” neobanks that help users signal environmental altruism.
The question for these companies and their investors is whether the in-group is big enough to justify building an entire bank around it. Making the unit economics of a bank work requires a certain number of users, but as we discussed earlier, signaling strength decreases as the size of the in-group increases.
So how do you solve this problem? By introducing multiple in-groups.
See, the current model looks like this:
(In fact, given how undifferentiated most of the offerings and cards are, there are actually multiple neobanks within the same small in-group bubble.)
But what if one bank targeted multiple, different in-groups?
For example, what if N26 had dedicated cards for soccer fans, hip-hop enthusiasts and gamers? Instead of focusing on just one signaling audience, their total addressable market would massively increase.
The way they would target these audiences is via brand collaborations. N26 does not have the necessary reputation in any of the above-mentioned areas to build credible signaling messages. It would not be able to build attractive in-groups on its own – but other brands could lend N26 their social capital.
What would N26 x Manchester United look like? Or Chime x Supreme? Or Revolut x 100 Thieves?
Because of a bigger target audience overall, individual in-groups could be kept smaller. Cards could be released as limited-edition drops, transforming them into collectibles, which would justify a higher price tag per card.
It’s worth noting that different signaling audiences are not mutually exclusive. We don’t just subscribe to a single in-group – our identities are prismatic. This means that some users might purchase multiple cards to signal to different in- and out-groups, resulting in even higher expected LTV per user.
A few neobanks have already started to experiment with brand collaborations and limited edition cards (see Cash App x Hood By Air or Point x Laura Berger). I expect there will be a lot more once neobanks realize what most of them really are: Signaling-as-a-Service companies.
And if you’re an (aspiring) 3D artist or designer: Would you like to bring some of these card ideas to life? There will be a follow-up post to this essay featuring the best card mock-ups. Send me a Twitter DM or email (hello at julianlehr dot com) if you want to learn more.
Twitter launched their version of Stories last week (called Fleets) – some initial thoughts:
I think the Stories format fits Twitter better than any other social network because it’s actually quite similar to how Tweets work. Both Stories and Tweets are modularized content. They work as stand-alone micro content (Tweet / Fleet) or can be grouped into a bigger piece of content (Tweet storm / Story) with sub-discussions for each element.
The difference between the two formats is that Tweet discussions are public, whereas Fleets will drive more usage of private discussions via Twitter DMs. This is a good thing. Twitter DMs are the most underrated part of the site (and probably the best shot any company has at disrupting LinkedIn). I just wish Twitter had improved DMs before driving more users to them.
When Instagram launched Stories, it saw that users posted less to the newsfeed – which they reserved for their best / most important photos. I wonder if we’ll see a similar trend on Twitter, but I doubt it. Tweeting photos and videos was never a great user experience, mainly because of the weird way Twitter auto-crops them, so I don’t think we’ll see cannibalization between the two formats.
The animation when swiping between Fleets feels clunky. Instagram Stories feel 10x smoother.
The creator tools for Fleets are by far the biggest disappointment. Twitter had a real chance to build something new here (personally, I think audio would be a *really* interesting format). Instead, it’s just a very limited version of other Stories features.
The Stories bar is great UI real estate for other features: I really hope Periscope will make a comeback. The rumored audio rooms would also fit nicely here.
I’ve been thinking a lot about corporate knowledge management systems recently.
If we think about a company as an organism, then a knowledge management system is essentially the (collective) brain that keeps that organism alive and running. A corporate knowledge management system should contain every single bit of codifiable information within the company resulting in a library of all projects, processes and procedures.
In an ideal state, it is the single source of truth that helps to inform every individual in the firm about what everyone else is up to. Information should be easy to add (input) as well as easy to search and find (output) resulting in quick knowledge transfer between different employees.
In reality, however, this hardly ever is the case. As anyone who has ever worked at a larger company can attest to, company knowledge bases always end up being a huge mess.
What starts with a neatly organized Confluence wiki, over time morphs into a multi-headed monster consisting of millions of notes and documents that live across Google Docs, Dropbox Paper, Asana and half a dozen different wiki tools. Most docs will be outdated, some will contradict others and the one you are really looking for only shows up on page 14 of your search results.
It seems like things usually start to fall apart once a company surpasses Dunbar’s number of 150 employees. This is probably the point at which people realize that all the different documents of explicit knowledge they have been amassing over the years were held together by implicit knowledge.
It’s easy to find – and understand – the right documents when you know every other person in the company, but once you’re past that point, you need a system to organize all the data so that people can make sense of it.
The idea behind tools like Notion is to solve this problem by using just one tool for all your different knowledge documents. Instead of Google Docs AND Asana AND Trello AND Airtable, you just do everything in Notion. This reduces complexity because you don’t have to switch and search across different apps. At the same time, Notion forces you to think about a system that makes information easy to find with its folder-like structure and links between different databases.
I’ve never used Notion with more than half a dozen people myself, but from what I’ve heard from people at larger companies, Notion knowledge bases also don’t scale very well beyond a certain number of users. Once too many people start contributing to it, things become bloated and unnavigable.
A friend at Stripe recently suggested – half-jokingly – that we should hire a librarian to organize all our internal data and documentation. The more I think about it, the more I like the idea. Perhaps every company should hire a Chief Notion Officer once it hits 100 employees?!
An alternative approach to Notion is a knowledge management system that can live across different tools and without active manual curation because it’s based on really powerful search. The folder structure of your Google Drive, for example, doesn’t really matter because looking up documents via search is faster and more convenient. Meta search tools like FYI are supposed to offer the same but across different productivity tools.
But again, I’m skeptical that this really works beyond a certain number of users (and thus documents). I remember even Google’s internal search engine doing only a mediocre job of surfacing the most relevant documents (and even when it did, you couldn’t be sure there wasn’t a better or more up-to-date version somewhere else).
I’m sure we’ll get there eventually, but until then we probably need a mix of automated search and manual human help – which is where Slack comes in. I’ve always thought Slack plus Notion plus Spoke would make a really powerful product (and I’m surprised Slack hasn’t made any major acquisitions in this space).
If you think about it, Slack is basically a search engine powered by humans: Most Slack messages are just questions. It’s 911 for when everything else fails. So if Slack had access to your entire knowledge base, it could answer at least the most commonly asked questions automatically. The rest would still get answered manually by the channel participants. Or your Chief Notion Officer.
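As a toy illustration of that idea (purely hypothetical – not a real Slack integration or knowledge-base API): an automated first pass matches an incoming question against known answers and only escalates to humans when no confident match exists.

```python
from difflib import SequenceMatcher

# Hypothetical sketch of a "human-powered search engine" with an automated
# first pass. All entries and names are made up for illustration.
KNOWLEDGE_BASE = {
    "how do i reset my vpn password": "See the IT wiki page 'VPN access'.",
    "where is the brand asset folder": "Shared drive > Marketing > Brand.",
}

def answer(question: str, threshold: float = 0.8) -> str:
    """Return the best-matching KB answer, or escalate to humans."""
    best_q, best_score = None, 0.0
    for kb_q in KNOWLEDGE_BASE:
        score = SequenceMatcher(None, question.lower(), kb_q).ratio()
        if score > best_score:
            best_q, best_score = kb_q, score
    if best_score >= threshold:
        return KNOWLEDGE_BASE[best_q]
    # No confident match: fall back to the channel participants
    # (or the Chief Notion Officer).
    return "No confident match - asking the channel."

print(answer("How do I reset my VPN password?"))
# → See the IT wiki page 'VPN access'.
```

A real system would use proper semantic search instead of naive string similarity, but the shape is the same: answer the common questions automatically, route the long tail to people.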
In his bestselling book Sapiens, Yuval Harari argues that humans became the dominating species of planet earth because we are the only animal that can cooperate in large numbers. This, he claims, is due to humans’ ability to believe in purely imaginative things and concepts. A company like Google, for example, doesn’t really exist. Sure, there’s the Google.com website and physical Google offices with real Google employees – but the idea of Google as a company is just a fictional concept. It only exists because multiple people believe in it. The same is true for legal systems, nations, religion or money. Every large human cooperation system is based on a fictional idea that only lives in our collective minds.
What Harari doesn’t discuss in his book is the extreme other end of this cognitive ability: Conspiracy theories. I’ve been fascinated by Jon Glover’s recent essay on QAnon, in which he compares conspiracy theorizing to alternate-reality games. Participating in QAnon conspiracies, he says, feels like playing a real-life multiplayer game based on secret insider knowledge.
Social media has made conspiracy theorizing so addictive and immersive that the line between story and reality can become incredibly blurry.
“A lot of these groups are like cults […] They have beliefs that border on religiosity … And when you contradict them, it’s like telling them Jesus isn’t real.”
The religion analogy is interesting because it’s a perfect example of why fact checking as a countermeasure is useless. Google, Facebook & co have all introduced fact checks and fake news labels to combat conspiracy theories. It’s naive to think that they will work.
Think about it: Science (which, you could argue, is also a form of fact checking) has been around for centuries trying to debunk most religious beliefs – and yet religion still plays a major role in Western society. If entire education systems teaching millions of people about science haven’t worked, why do you think adding a small fact check disclaimer below a YouTube video would?
It’s worth pointing out that science is also just another belief system. We laugh about flat earthers, but how many people can actually explain why the world is round in a scientifically correct way? Most of us don’t know science, we believe in science.
What should give us hope though is the fact that many people believe in *both* science and religion despite their contradictions. This means that multiple realities can co-exist even when they are at odds with each other.
We don’t live in just one reality – we switch between different realities (and play different characters within them). It’s a bit like Westworld, where guests can explore different theme parks: Westworld, Shogunworld, Warworld, etc.
Similar to Westworld, it’s becoming increasingly difficult to distinguish between what’s real and what isn’t. As Aaron Z. Lewis points out in his brilliant essay You Can Handle the Post-Truth, we have created a fragmented reality with hyper-realistic CGI influencers, bots, deepfakes, AI pretending to be humans and humans pretending to be AI. We don’t live in a single timeline with a single history, but in a variety of “contradictory reality bubbles”.
Bruno Maçães paints a similar picture in his excellent book History Has Begun. America, he believes, is in the process of transforming into a new, post-liberal society, distinct from current Western civilization. It’s a society that has not only been heavily shaped by television but one where reality and fantasy overlap.
This transformation has been in the making for a while: Kennedy had the aura of a movie star and leveraged his image through the medium of television. Nixon created the first political soap opera with the Watergate scandal. And with Reagan an actual movie star moved into the White House.
Trump is the ultimate culmination of this trend. His entire presidency feels scripted. His tweets end with cliffhangers. A House of Cards screenwriter would not have been able to come up with a better story.
Reagan and Arnold Schwarzenegger used the social capital and entertainment skills they acquired as actors to appear more likable and competent as politicians, but at least they tried to be politicians. Trump, on the other hand, uses politics as another stage for his acting performance.
“Americans see the world as an action movie,” Maçães writes. I think this became especially apparent during the current covid-19 crisis and the most recent wildfires in California. People in my social media timelines seemed only superficially worried. Instead, their posts contained an underlying sense of excitement about real life finally catching up with the science-fiction aesthetics of Blade Runner and Akira.
Perhaps this is Hollywood’s greatest achievement: It gets us excited about our dystopian future. The world might be ending, but at least it’s an ending that’s entertaining to watch.
If Hollywood created the fantasy worlds that reality is catching up with today, who is creating the fantasy worlds of tomorrow?
Maçães thinks the answer is Silicon Valley, which he describes as “a fantasy land where engineering talent and capital come together to power the serious project of creating new worlds out of nothing”. It’s one of the most idiosyncratic descriptions of how startups work that I have read. VCs are the new Hollywood studios; founders are the directors and actors.
A founder’s job is essentially to create the most compelling narrative of what their company will look like in 10 to 20 years’ time. It’s not lying, it’s telling pre-truths. Being contrarian just means that you came up with a novel fantasy plot no one else had thought of yet.
Sometimes founders are able to re-create the fantasy narratives of their pitch decks. Sometimes you end up with Theranos.
And even when you do end up with Theranos, at least you get material for an exciting new Netflix series. Perhaps VCs should buy the movie rights to the startups they invest in as a hedge against their biggest portfolio failures?
The concept of the tech industry as a creator of fantasy worlds immediately reminded me of a conversation I had with my friend Max recently. His theory is that it’s not the lack of tech talent or venture capital that explains why Europe hasn’t been able to create a tech ecosystem on par with the US. It’s the absence of religiosity that has kept Europe from creating its own Google or Facebook. The US is able to create larger companies because it’s able to believe in larger and more ambitious narratives.
Silicon Valley is not just creating new fantasy worlds, it is building tools that allow others to create their own fantasy worlds. Enter social media.
If TV has taught us to think of ourselves as characters in the story of our lives, then social media has allowed us to actually write and edit the script and build fictional characters. Social media is essentially the democratization of virtual world building.
As I wrote in Signaling-as-a-Service, Twitter, Snapchat and Facebook are just massive virtual status arenas that allow us to build social capital through signaling. Some of that social capital might be built on top of real stories and actual achievements, but most of it is not based on reality. Every time you apply an Instagram filter, you are already changing reality.
It’s not just that we bend reality in our social media narratives, we also play different characters. As Chris Poole already pointed out years ago, we all have multiple (online) identities. There is not just one reflection of yourself – identity is prismatic. Twitter-Julian (armchair intellectual) is not the same as Instagram-Julian (hobby photographer) or Facebook-Julian (high-school drinking buddy). Google Circles and Facebook Lists always got this wrong: They let us change who we shared with, but not who we shared as.
This is why social networking is not a winner-take-all market. We need different channels for our different, contradicting online personas.
The problem is not that we live in multiple realities or that these realities are sometimes at odds with each other. What’s problematic is that we sometimes get so immersed in one virtual world, that we forget about all the other realities – which brings us back to the problem of online conspiracies.
This was originally supposed to be a blog post about Hey. I wanted to write a longer essay about Basecamp’s new email tool and test if the app actually lives up to its hype.
After playing around with it for a few weeks, my conclusion is this: Hey’s most interesting aspect is not its radical approach to email – but its fresh approach to note taking!
We have long treated notes as a distinct silo in our productivity stack, when we should have integrated them right into our workflows instead. While email might need an overhaul, I see a way bigger opportunity in rethinking digital note taking.
So instead of my Hey review, let’s talk about notes and my idea for a radically new kind of note taking app.
02 A Closer Look at Notes in Hey
Hey has two interesting notes features.
The first are so-called Thread Notes. These are basically emails to yourself within an email thread that only you can see. You might have seen similar internal notes features in shared inbox tools like Zendesk or Front. Thread Notes in Hey are effectively the single player version of those.
I’d find Thread Notes super useful in combination with snoozed emails: “Show me this email again in [insert time] and remind me of [insert note]”
This feels like a way better workflow than adding a note in a separate reminder, to-do, CRM, or note taking app.
a) There’s no need for context/app switching. b) You might not even remember that you took a note related to an email when it resurfaces in your inbox a few weeks later.
To-do and reminder apps (and calendars!) work great for tasks that are tied to a specific day or time. But many tasks – and especially notes – are not dependent on time. Their relevance is based on other trigger points. Only when certain conditions are met should these notes resurface: “If [insert event] is true, then show [note]”
In the case of our email, the note becomes relevant in [insert snooze time] or whenever the recipient replies to the email thread. The fact that many tasks have external dependencies (which are usually linked to an email thread) is one of the reasons I believe that your email inbox should also be the place where you manage your to-dos. You shouldn’t need a separate to-do app.
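To make the trigger idea concrete, here is a minimal sketch of what condition-based (rather than time-based) notes could look like. All names here (`ConditionalNote`, `NoteStore`, the context keys) are hypothetical – this is just one way to model “if [event] is true, then show [note]”:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class ConditionalNote:
    text: str
    trigger: Callable[[dict], bool]  # predicate evaluated against the current context

@dataclass
class NoteStore:
    notes: list = field(default_factory=list)

    def add(self, text, trigger):
        self.notes.append(ConditionalNote(text, trigger))

    def resurface(self, context):
        # Return every note whose trigger condition is met by the context
        return [n.text for n in self.notes if n.trigger(context)]

store = NoteStore()
# A note tied to an event (a reply arriving), not to a date
store.add("Mention the new pricing", lambda ctx: ctx.get("event") == "reply_received")
# A note tied to a snooze duration
store.add("Follow up if no reply", lambda ctx: ctx.get("days_snoozed", 0) >= 14)

print(store.resurface({"event": "reply_received"}))  # ['Mention the new pricing']
```

The point of the sketch: the note never asks “what day is it?” – it asks “is my condition true right now?”, which is exactly how a snoozed email with an attached Thread Note behaves.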
The second feature is Inbox Notes. As the name suggests, these notes are added to individual emails in your inbox. Similar to Thread Notes, you can use them to quickly jot down things you need to remember, but they also help you to highlight specific emails.
Thread Notes and Inbox Notes feel similar, but they serve two slightly different use cases. Thread Notes work more like reminders (“Don’t forget X when you reply”), whereas Inbox Notes feel more like bookmarks that highlight the most important messages in a long list of emails.
Together, they remind me of one of my all-time favorite note taking tools: Post-it Notes.
03 Post-it Notes
I’m a huge fan of physical note taking and there are two writing tools that I use every single day: A physical notebook (for longer thoughts, including first drafts of my blog posts) and post-it notes (for all kinds of quick notes).
(Disclaimer: When I say “post-it notes” I’m referring to all types of sticky notes, not just those sold by 3M.)
Post-it notes serve two of the same functions that Hey’s note features offer: highlights and reminders.
One of the reasons I still read a lot of non-fiction in physical book form is because it’s easier to bookmark and annotate passages that I quickly want to find again later. Similar to Inbox Notes, sticky note bookmarks help me highlight the most important items in a long list.
Apart from helping you find important passages in a book later on, sticky note bookmarks also allow you to add additional context to the section you highlighted (e.g. *why* you bookmarked a particular section or thoughts you had about it).
You could write down notes like this in a separate notebook, but then you’d lose the connection to the source they are based on. What makes post-it notes so interesting is the spatial relationship between the notes and their respective context.
It’s this spatial relationship that also makes post-it notes great reminders.
Post-it note reminders are similar to Hey’s Thread Notes in that they are triggered not based on time but on events that don’t have a (forecastable) deadline. They are essentially like notifications that appear when you look at specific objects.
A post-it note on your front door, for example, is like a notification that pops up when you’re about to leave the house: “Before you go, don’t forget to [insert note]”. A shopping list on your fridge is a data request notification that surfaces when you are most likely to have new items to add to your list.
Together, post-its essentially become a notes layer that augments the real world. Instead of a physical notebook that lists all your notes and tasks in chronological order, post-it notes are scattered around your house but tied to specific places or objects where they are most relevant.
The question is: Why isn’t there a digital note taking tool that works like this?
04 A Spatial Note Taking Layer
There are dozens of great note taking apps out there: Evernote, Google Keep, Apple Notes, Workflowy, Notion, Roam … the list goes on and on. Every one of these tools has its own unique angle on note taking, but they all have one thing in common: They are stand-alone apps.
This strikes me as suboptimal. Neither the creation nor the consumption of notes should be treated as separate workflows.
As John Palmer points out in his brilliant posts on Spatial Interfaces and Spatial Software, “Humans are spatial creatures [who] experience most of life in relation to space”. Post-it notes are so powerful because they have a spatial relationship to their context.
Many notes shouldn’t live in a dedicated note taking app that you explicitly have to open and search. Notes should emerge automatically whenever and *wherever* they are most relevant.
As long as note taking remains separate, users constantly have to switch back and forth between different applications, which is not ideal. It reminds me of the recent discussion around productivity and collaboration – which have historically also been treated as two separate, isolated workflows. As Kevin Kwok puts it:
The platonic flow of productivity should minimize time spent not productive, with collaboration as aligned and unblocking with that flow as possible. By definition, any app that requires you to switch out of your productivity app to collaborate is blocking and cannot be maximally aligned. It’s fine to leave your productivity app for exceptions and breaks. But not ideal when working.
The same applies to notes. You shouldn’t have to switch apps and context to take or consume notes. It should stay within the same workflow!
(Side Note: You could argue that note taking is essentially single-player collaboration where you communicate with your future self – but that’s a whole new discussion I’ll save for another blog post.)
Natively built-in note taking features like email notes in Hey are a step in the right direction – but email is just one distinct silo in your productivity stack. Imagine you had to buy different sets of post-it notes for every single room or object in your house.
What we need instead is a spatial meta layer for notes on the OS level that lives across all apps and workflows. This would allow you to instantly take notes without having to switch context. Better yet, the notes would automatically resurface whenever you revisit the digital location where you left them.
Let’s look at a few examples.
05 Use Cases
One use case that immediately came to mind when I thought about spatial notes is bookmarking.
Most of us don’t use just one bookmarking app for everything. We use different bookmarking apps or bookmarking features depending on the type of object we want to save for later: Podcasts are usually saved in a dedicated podcast app, for example. Articles are bookmarked in Pocket, books on Goodreads, songs on Spotify, places on Foursquare, products on Amazon … you get my point.
Bookmarks are great for remembering *what* you want to revisit later – but not *why* you saved something in the first place. I would love to be able to add notes to my bookmarks directly in each app so that I have some context on why these objects are important when I return to them later.
Ideally, these notes wouldn’t just show up in the one place I originally left them, but across all apps and websites that reference the (semantic) object I bookmarked. A note attached to a book I want to read in Goodreads, for example, should also emerge when I see that book in my Amazon search results – or when someone mentions it in my Twitter timeline.
People are a similar type of semantic object you could tie notes to. Instead of using a stand-alone CRM tool, you would leave a note attached to a person straight from your current workflow (e.g. your email client). That note would then automatically resurface whenever the person it references becomes relevant again:
When you’re in an email thread with them
When you add them to a calendar event
When you’re visiting their LinkedIn page
When you look them up in your phone book
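The mechanics behind both use cases are the same: the note is keyed to a semantic object (a book, a person), not to the app it was created in. A minimal sketch of such a shared layer – all class and ID names here are hypothetical, and a real implementation would need a cross-app way to resolve objects to stable IDs:

```python
from collections import defaultdict

class SpatialNoteLayer:
    """Notes keyed by a semantic object ID rather than by app.

    Any app that can resolve what it is currently showing (a book's
    ISBN, a person's email address) can query the shared layer and
    surface the attached notes in place.
    """
    def __init__(self):
        self._notes = defaultdict(list)

    def attach(self, object_id, note):
        self._notes[object_id].append(note)

    def notes_for(self, object_id):
        return list(self._notes[object_id])

layer = SpatialNoteLayer()

# Note left while browsing Goodreads, keyed by the book's ISBN ...
layer.attach("isbn:9780062316097", "Recommended by Anna – skim ch. 4 first")

# ... later, an Amazon search result or a tweet that resolves to the
# same ISBN surfaces the exact same note:
print(layer.notes_for("isbn:9780062316097"))
```

The design choice that matters is the key: because notes are addressed by object rather than by location, they follow the object across every app that recognizes it – which is what makes this a layer rather than another silo.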
Another use case for spatial notes are instructions on how to use specific software features or improve workflows. These could be quick reminders to add permissions to new calendar events or to use Filtered Views in Google Sheets. You could also use these notes to train users on keyboard shortcuts.
You could imagine employers shipping corporate laptops with pre-installed notes to make it easier to transfer (previously tacit) knowledge and thus improve the onboarding process for new hires.
06 Closing Notes
I could go on and on about potential use cases for a spatial note taking app. The possibilities are endless – but blog posts shouldn’t be. So I’ll end things here.
A final note before you leave: I’d love to hear your thoughts on this whole idea. What would you use a spatial note taking tool for? Let me know what you think in this Twitter thread!
When I say proof-of-work, I’m not talking about consensus algorithms like the ones that some cryptocurrencies use. I’m talking about social networks.
At their core, social networks are primarily about one thing: Building social capital through signaling. As I wrote in Signaling as a Service, signaling can be broken down into three different components:
Signaling Message A hidden status subtext you’re trying to convey about yourself
Signaling Distribution The channel through which you’re communicating your signaling message
Signaling Amplification Ways to boost your signaling message to compete against status rivals
For example: A Patagonia vest signals both a prosocial attitude (“I care about the environment“) as well as wealth (“I can afford to spend $500 on a jacket“). Depending on where you live, it might also signal something about your occupation.
In order to signal these messages to others and build actual social capital you need a signaling distribution channel. One option would be to wear the vest in public where others can see it – but there are obvious physical constraints to the size of the audience you’d be able to reach.
This is where social networks come in.
Their primary role is to distribute signaling messages at scale and transform them into quantifiable social capital (in the form of likes and followers).
As social networks grow, they increase the potential reach of your signaling messages – but they also get crowded with status rivals. This is why social networks typically provide you with a set of signaling amplification tools. These tools help you boost your signaling messages and stand out from the crowd.
As Eugene Wei writes in Status as a Service: “Almost every social network of note had an early signature proof of work hurdle. For Facebook it was posting some witty text-based status update. For Instagram, it was posting an interesting square photo. For Vine, an entertaining 6-second video. For Twitter, it was writing an amusing bit of text of 140 characters or fewer. Pinterest? Pinning a compelling photo. You can likely derive the proof of work for other networks like Quora and Reddit and Twitch and so on. Successful social networks don’t pose trick questions at the start, it’s usually clear what they want from you.”
But the more I think about it, the less I like the comparison. I actually think that Eugene’s proof-of-work theory only scratches the surface of what social networks actually do.
Let me explain.
02 A closer look at proof mechanisms
Take a look at this very cliché Instagram picture. The photographer clearly put a lot of thought and effort into its composition and applied different filters and editing tools to make it look nicer.
It’s a perfect example of Eugene’s definition of proof-of-work. Proof-of-creative-work, to be more exact.
Editing your photo helps to amplify your signaling message and sets you apart within Instagram’s status arena (aka the newsfeed). It also adds additional signaling messages to your post: “Look how great a photographer I am” or “I’m a creative person”.
But those are not the main signaling messages you are communicating here. What you really want to tell your followers with this photo is something along the lines of “I’m a world-traveler” and “I’m in a happy relationship” (which in turn are also just signaling proxies for wealth and mating worthiness).
The photo and the location tag are your proof points.
Social networks are therefore not only signaling distribution (and amplification) networks – they also allow users to prove their signaling messages.
The creative proof-of-work is just pretext and helps to boost your post. What’s more important are the additional proof mechanisms that social networks provide. In the case of Instagram those are photos and location tags.
Instagram is essentially “pics or it didn’t happen”-as-a-service.
03 Implications for new social networks
When new social networks emerge they have to introduce new proof mechanisms to differentiate themselves from existing incumbents. These can either be novel proof-of-creative-work hurdles or completely new proof-of-x mechanisms.
TikTok is a good example for proof-of-creative-work innovation. The app provides creators with a powerful set of video editing tools that have opened a whole new level of creativity.
The cost to participate in TikTok’s status game is a lot higher than Instagram’s (compare a well-made dance choreography on TikTok to your median Instagram travel post) – but its powerful feed algorithms also make discovery easier and thus reward users faster and with more social capital.
TikTok doesn’t add any new proof points beyond its novel creative work hurdle though. You can signal and prove your creativity but you could achieve the same by uploading your video to Instagram.
Strava, on the other hand, introduced an entirely new proof mechanism: Proof-of-physical-activity. Using a phone’s GPS sensor (or a 3rd-party fitness tracker), users can actually prove how much and how fast they ran or cycled. In contrast to Instagram photos, Strava’s proof mechanism is a lot harder to fake (though there are exceptions).
What’s great about Strava is that it reinforces a behavior that’s actually good for you: While the status game that initially got you into the app might be zero sum, the actual physical exercise you have to put in to compete has a very positive, compounding effect.
The question is: What other social networks should we build that could have similar positive feedback loops? And what are their proof mechanisms?
04 Strava for X
I love the idea of a Strava for Cooking – but I’m very skeptical that it can be built. Why? Because the necessary proof mechanisms don’t exist.
The primary metric you optimize for when cooking is taste. But how would you measure or quantify taste? The closest proxy to taste that we have is optics: How good does the meal that you cooked look? This can easily be proved with a photo – but that’s a proof-of-work mechanism that Instagram already offers (including filters to make your food look nicer). As long as no one comes up with a better proof mechanism for cooking, I think it’s unlikely that we will see a successful social network in the space.
I’m more optimistic about Strava for Learning.
While the activity of learning itself might be hard to quantify, you can measure the outcome of learning: knowledge. Has anyone built a multiplayer version of Anki yet? Flash cards would be a perfect proof-of-knowledge mechanism and could easily be turned into a game where you compete against friends.
A related product I’d love to see is Strava for Reading. Imagine an eBook reader that not only tracks how much time you spend reading but also *what* you are reading. Based on these proof-of-(reading)-work mechanisms you could build streaks or GitHub-contributions-like visualizations that incentivize users to read more (and more regularly).
You could even build leaderboards for different topics based on the content of the books and articles you read. Or think about a score that indicates how balanced your reading is across topics (to incentivize users to read takes on political topics from different perspectives).
(Side note: Amazon’s monopoly on books might be the most underrated sub-optimal equilibrium in tech.)
Another app that would be interesting is a social investing app. Think “Robinhood but as a social network”. It seems like investing is already quite a social activity – just look at communities like r/wallstreetbets. As patio11 pointed out, Robinhood already feels more like a game than a finance app.
So why not build an investing app that opens with a feed of all your friends’ investments and their returns over time? Instead of sharing screenshots on Reddit and Instagram you could prove your investments right in the app.
Note that an app like this would not be about signaling wealth. It’s about signaling being right and the ability to prove it. This is probably an even stronger and more engaging mechanism than signaling wealth – and the reason why I’m still bullish on prediction markets.
Perhaps a well-designed, consumer-friendly prediction market app would be the ultimate proof-of-x social network. Strava for being right.
05 A Closing Ask
While we are on the topic of being right: Do you agree with my thoughts in this post? What other social networks and proof-of-x mechanisms would you like to see?
The easy way to read this chart is that consumers are becoming less interested in finding the cheapest options and are instead searching for the best options. I find that difficult to believe.
A perhaps more interesting interpretation is that Google’s “cheap” search queries are declining because users already know where to find the cheapest option: On Amazon (and other vertically specialized search engines like OTAs).
As we discussed in last week’s article, Amazon – like Google – is primarily a search engine. But since all its search results – unlike Google’s – are products, it’s easy to rank them by price. If you already know what you want, there’s no point in searching on Google first. Your shopping journey starts and ends on Amazon.
But what about the “best” option?
Amazon’s default search results are “Featured”, which factors in a variety of criteria (purchase frequency, availability, reviews, …) to show you the most relevant products. But that’s not the same as the best. (Side note: It actually turns out that the most relevant results also happen to be the most profitable for Amazon). You can also choose to rank results by customer reviews, but those scores don’t feel very trustworthy either.
As a result, non-price driven Amazon purchase journeys initially start on other sites which help users figure out what the best product for them is (through curation and reviews). This is similar to the Shopify model, which relies on discovery channels such as Instagram and Pinterest to drive users to its stores.
In contrast to Shopify though, there is not one – or even a few – dominating channels. Discovery is spread across many, many different websites, which Amazon rewards with its affiliate program. The fact that Amazon just drastically reduced its affiliate fees is perfect evidence of how little negotiating power these individual sites have in this value chain, despite their collective importance.
The high number of affiliate partners also explains why people are still using Google to search for “best” options. Not only do consumers need to figure out what the best product is – they first need to figure out what the best product review site is.
The second problem I see is that reviews only work for a handful of product categories. You can only rank and compare products if they have a strong utility. For example, you can determine what the best TV is by looking at screen resolution or HDR support. These features are easy to measure and compare.
But how would you decide what the best pair of sneakers is? Or the best handbag? You could look at build quality or materials, but those attributes are neither easy to quantify nor do they have an actual influence over what people perceive as the best.
So how do you determine what the best option is when utility isn’t the decisive factor in the purchase decision?
The core idea behind mimetic theory is that human development is based on imitation. What sets humans apart from other species is our ability to learn by observing and copying others. According to Girard, this includes watching and imitating what other people desire.
This is not something most of us are aware of. We think we make autonomous purchase decisions based on objective facts (“These shoes are waterproof”) or personal preferences (“I like the way these sneakers look”).
In reality though, Girard argues, there is never a direct relationship between subject (the consumer) and object (the product). Instead, the relationship is always triangular between the subject, the object and a so-called mediator – someone the subject is drawn to and wants to imitate.
In other words: We don’t actually want the object itself. What we really want is to be like the person we admire. The object is just a means to an end.
The person we are trying to imitate might be a celebrity, but it could also be one of “the cool kids at school” or someone you discovered on Twitter or Instagram.
As a consequence, there isn’t a “best sneaker”. What you perceive as “the best” isn’t based on objective attributes, it depends on who you are trying to imitate.
Advertisers understand this principle really well: you’re not trying to convince somebody that they want Bud Light or a Ford F150; you’re telling them they ought to desire membership to a particular peer set, and the way to become a part of that group is to drink Bud Light and drive an F150. It’s why Abercrombie can advertise their clothes with models that aren’t actually wearing any of those clothes; the clothes aren’t the point.
This is also why influencer marketing works so well and why Instagram has become the perfect discovery channel for Shopify.
04 What Shopify Should Build
As we discussed in last week’s essay, Instagram is both a blessing and a curse for Shopify. On the one hand, it is the perfect discovery channel for the type of products that are typically sold by Shopify merchants: visually appealing objects you didn’t even know you wanted (fashion, homeware, furniture, etc). On the other hand, too much reliance on Instagram can become dangerous. A demand aggregator always has the upper hand over a supply aggregator as evidenced by the high tax Shopify D2C brands have to pay to Instagram in the form of ads.
Nevertheless, further integrating with Instagram is probably a good idea for Shopify. Instagram’s user behavior is a prime example of mimetic desire. Users can scroll through the life of the person they want to imitate to get an idea of what they should desire.
Shopify already announced a deeper integration with Instagram and Facebook last week; shops can now sell directly on Instagram. The ideal feature, however, would allow users to buy objects straight from the feed of their favorite influencers.
While brands will still be important (for signaling, among other things), I suspect that a lot of stores will become commoditized over time. Ecommerce will become more modularized as transactions shift from both retailers and D2C brands to individual influencers.
It’s not hard to imagine a future with a separate Instagram profile tab that lists all the products a user recommends. The user becomes the window display – the actual store is just an API in the background.
Similarly, should Shopify decide to make its Shop app an actual discovery platform, it should build its recommendation feed around influencers – not shops.
Rather than an algorithmic feed with random products, the app should feature collections of products that certain people use or recommend. Apps like Svpply and Kit have tried to build similar product recommendation services, but none of them have ever gained mainstream adoption. Yet I’m still convinced that there is a market for a stand-alone app that does curated product discovery.
05 How Amazon Could Leverage Mimetic Theory
Amazon is not a product discovery platform, it’s a search engine. It works best when you already know what you want to buy. When you search for “Sapiens”, Amazon will give you a variety of options to buy Yuval Harari’s bestseller (audiobook, Kindle, hardcopy, etc). Perfect.
If you don’t have a specific book in mind yet, however, and just want to discover a history book, Amazon becomes useless. It will show you a list of every SKU available that fits the history book description, but no real guidance on which book you should pick.
But what if you could filter and rank search results by mimetic desire?
Instead of a seemingly random list of books, Amazon should now only show me reading recommendations from people I admire. Who these people are could easily be derived from Twitter data, for example (users I follow + whose tweets I engage most with).
Search results are now ranked by my personal mimetic score. I can also see at first glance why each particular book in the list is relevant to me. Not only would this feature improve Amazon’s search results, it would also turn the site into more of a discovery platform.
06 Closing Thoughts
Given their respective value chains, Amazon and Shopify both have an interest in becoming better at discovery. Technology companies have a tendency to (try to) solve discovery with automated recommendation engines, but that’s not how we make purchase decisions.
Algorithms are not the reason why we buy things, no matter how good they are. Mimetic desire is.
This is why curation is underrated – not because it is actually better than algorithmic suggestions, but because it is perceived as being better.
If this essay has inspired you to imitate me and my desires, feel free to follow me on Twitter. It would be a great honor to become your mediator.