All Posts

obsidian

Essay Quality Ranker

07 May 2025 — 2 minutes read

Ever found yourself with dozens of draft essays in Obsidian but no clear idea which ones need the most editing work? I did, and that’s why I built EditNext, an AI-powered plugin that ranks your markdown files based on how much editing they need.

The EditNext plugin uses LLMs and linguistic analysis to evaluate your drafts, providing a prioritized list of which documents deserve your attention first. It’s like having an editorial assistant that helps you focus your efforts where they’ll have the most impact.

See the EditNext plugin for Obsidian for installation instructions and detailed usage. This tool helps writers:

  • Identify which drafts need the most work
  • Understand specific weaknesses in each document
  • Track improvement as you edit documents
  • Save time by focusing on high-priority edits
  • Leverage AI insights without leaving Obsidian

The plugin analyzes your documents using a combination of AI evaluation, grammar checking, and readability metrics to generate a comprehensive editing priority score.

Example output:

📊 EditNext Analysis Results:
┌─────────────────────────┬───────────────┬─────────────┬────────────────┬───────────────┬───────────────────────────────────┐
│ Document                │ Editing Score │ LLM Score   │ Grammar Score  │ Readability   │ Notes                             │
├─────────────────────────┼───────────────┼─────────────┼────────────────┼───────────────┼───────────────────────────────────┤
│ draft-essay-1.md        │ 87            │ 92          │ 76             │ 64            │ Needs structural work, unclear    │
│                         │               │             │                │               │ thesis, many grammar issues       │
├─────────────────────────┼───────────────┼─────────────┼────────────────┼───────────────┼───────────────────────────────────┤
│ almost-there.md         │ 42            │ 35          │ 58             │ 42            │ Minor flow issues, a few awkward  │
│                         │               │             │                │               │ transitions                       │
├─────────────────────────┼───────────────┼─────────────┼────────────────┼───────────────┼───────────────────────────────────┤
│ ready-to-publish.md     │ 18            │ 12          │ 22             │ 31            │ Polished, minor proofreading      │
│                         │               │             │                │               │ needed                            │
└─────────────────────────┴───────────────┴─────────────┴────────────────┴───────────────┴───────────────────────────────────┘
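The plugin’s exact scoring formula isn’t documented in this post, but a weighted blend is the natural shape for it. Here is a minimal sketch, assuming hypothetical weights and treating each signal as a 0–100 “needs work” score (higher means more editing needed):

```python
# Hypothetical sketch of a composite "editing priority" score.
# The weights below are assumptions, not the plugin's actual formula.

def editing_score(llm: float, grammar: float, readability: float) -> int:
    """Blend three 0-100 'needs work' signals into one 0-100 priority.

    The LLM judgment is weighted heaviest here, on the assumption that
    it captures structure and argument quality, not just surface errors.
    """
    weights = {"llm": 0.5, "grammar": 0.3, "readability": 0.2}
    score = (weights["llm"] * llm
             + weights["grammar"] * grammar
             + weights["readability"] * readability)
    return round(score)
```

With these guessed weights, draft-essay-1.md’s sub-scores (92, 76, 64) would blend to 82; the real plugin evidently weights things differently, since the table shows 87.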

Install it from Obsidian’s Community Plugins:

  1. Open Obsidian Settings → Community plugins
  2. Turn off Restricted mode (formerly called Safe mode) if needed
  3. Search for “EditNext Ranker” and click Install
  4. Enable the plugin and enter your OpenAI API key in settings

If you’re serious about improving your writing, this plugin offers a systematic approach to tackling your editing backlog. It’s especially useful for managing digital gardens, notes collections, or any large set of drafts.

browser-extension

Export LLM conversations as snippets

05 May 2025 — 2 minutes read

I often have deep conversations with AI assistants like ChatGPT and Claude, and want to share these insights with colleagues or include them in blog posts. But copying raw text from these interfaces produces bland, unformatted content that loses the conversational flow. Existing screenshot tools didn’t preserve the conversational format while allowing for text selection.

I’ve created ChatSnip, a browser extension that lets you export AI chat conversations as beautifully styled HTML snippets or well-formatted Markdown.

You can find the extension in the Chrome Web Store or check out the GitHub repository for the source code. ChatSnip helps users:

  • Export conversations from multiple AI models (ChatGPT-4o, Claude 3 Opus, Gemini 1.5 Pro, etc.)
  • Save in HTML format for web embedding or Markdown for documentation
  • Automatically extract conversations from supported AI chat websites
  • Create consistently styled chat bubbles with proper attribution

The extension offers a simple interface with automatic page detection and custom model name support.

Example output (HTML):

<div class="chat-container">
  <div class="user-message">
    <div class="avatar">You</div>
    <div class="message">Can you explain how transformers work in machine learning?</div>
  </div>
  <div class="assistant-message">
    <div class="avatar">Claude</div>
    <div class="message">Transformers are a type of neural network architecture that revolutionized NLP...</div>
  </div>
</div>
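The Markdown flavour of the export isn’t shown above; as a rough sketch, the same conversation could be rendered along these lines (the format is my guess, not ChatSnip’s exact output):

```python
# Hypothetical sketch of rendering a captured chat to Markdown.
# ChatSnip's real output format may differ.

def to_markdown(messages: list[tuple[str, str]]) -> str:
    """Render (speaker, text) pairs as a blockquoted Markdown transcript."""
    parts = [f"**{speaker}:**\n\n> {text}\n" for speaker, text in messages]
    return "\n".join(parts)

chat = [
    ("You", "Can you explain how transformers work in machine learning?"),
    ("Claude", "Transformers are a type of neural network architecture..."),
]
print(to_markdown(chat))
```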

Installation via Chrome Web Store:

  1. Navigate to the ChatSnip extension page
  2. Click “Add to Chrome”
  3. Confirm the installation when prompted

Or load it as an unpacked extension:

  1. Clone the repository
  2. Run npm install and npm run build
  3. Load the extension in development mode

ChatSnip makes sharing AI conversations simple while preserving their context and formatting.

interviewing

Flipping questions on its head

02 May 2025 — 4 minutes read

A cardinal sin in design research is asking a leading question. If you’re interviewing a consumer of Pepsodent toothpaste, you should almost never, ever ask, “How much do you enjoy using Pepsodent toothpaste?” That’s a leading question, and the answers it yields are usually loaded with confirmation bias: they tend to be ‘what they think you want to hear’, rather than ‘what they actually think/feel/do’.

This guiding principle of interviewing without leading questions applies across many settings: market research for F&B brands, company interviews, philosophy debates, and even some negotiations. The rule of thumb is to open with generic, open-ended questions such as “What does a day in your life look like?”, and then gradually narrow down topics based on what the end-user wants to say or express.

More recently, I’ve created a variation of the ‘leading question’ and flipped it on its head. I call it the ‘non-leading leading question’. I can’t think of a better term without convoluting the crux of what it does, so let me illustrate this 4D chess move with an example.

Let’s say there is a restructuring in your company, and you are being moved to a new team and a new project. Your expectation is that the new team/project is much more ambitious, chaotic, highly charged, and even stressful at times. (This is counter-intuitive and not what everyone would want.) But you are hell-bent on working in such an environment, and the mythical “work-life balance” is not something you weigh much. How do you then gauge, through a series of questions, whether this new team/project is ambitious and chaotic?

What I would do in such situations is to pull up the “non leading leading question” card.

I would ask: “Would there be good work-life balance in the new team?” The natural expectation is for the manager to say yes; it’s not against the grain, and it’s not counter-intuitive. But in answering, they reveal the true nature of the new team. The inverse of the inverse question has shown you how this team works; in other words, the question has achieved its purpose.

The manager thinks this is a leading question: they assume you’re inclined to move to a team with better work-life balance, while you’re not revealing your cards and are in fact playing a bluff.

If, for the same question, “Would there be good work-life balance in the new team?”, the manager says, “Actually, Shreyas, you know what, this is a mission-critical project, with a lot of dependencies to get the right outcome,” the question has still achieved its purpose. You are being posted to a more ambitious, growth-oriented team, hungry to ship fast and move at break-neck speed. This is what you want!

By contrast, if I had asked, “Would there be too much stress in the new team?”, I would risk getting a lot more smoke signals that make the response unclear.

This was one specific example, but there could be more situations where such non-leading leading questions can be leveraged.

I used this in a recent interview I conducted. I asked one of the candidates: “How do you ensure that you do ethnographic studies for all your projects?” What I expect is for them to call out the bluff: it’s impossible to do extensive user research for every project, since in reality you’re dealing with tradeoffs of time, effort, complexity, and so on. Again, the non-leading leading question acts as an inverse of an inverse.

writing

Vibe writing maxims

02 May 2025 — 3 minutes read

Some vibe-writing maxims:

  • While writing, keep two windows open: one for the writing, the other for ChatGPT. I used to consult ChatGPT a couple of times per essay for internet research, but its role has since shifted to that of a conversational thought partner, helping me riff on the idea for the essay. (Say you’re writing an essay about tariffs and want to understand the general consensus, so that you can offer a unique insight that counter-positions against it; ChatGPT is particularly helpful for surfacing that consensus.) Deep Research is also effective at productizing the ‘secondary research’ part of any essay, where you would otherwise crawl all over the internet and then derive your analysis; it’s basically a McKinsey-level research analyst at your doorstep for this part of the process.
  • It also helps with finding alternative words. Say you want an alternative to “outrage”: you can give the model some references to ground it, so that it generates the close cousin you’re expecting. This works at the word level, the sentence level, or even the paragraph level.
  • AI can also help flesh out the essay with a meta-analysis. Even on your roughest first pass (or half-way through it), you can probe ChatGPT: am I getting to the core of the argument quickly? Is it building momentum? Is this a good opening, or are there better alternate openings and closings?
  • For essays that involve more argument analysis, I also apply some questions inspired by Lex Fridman’s interview style. For example:

| Question | Purpose |
| --- | --- |
| Can you steelman the case for …? | Elicit the strongest argument for it |
| Can you strawman the case for …? | Address counterarguments |
| Can we break this down into first principles? | Analyse from basic axioms |
| What happens if we take this to the extremes? | Test robustness at the edge cases |
  • The more we use AI-assisted writing, the bigger the risk of the writing sounding like AI. The line needs to be tread carefully: use AI for asking good questions, identifying flaws, and improving/editing content, not for putting words in your mouth. I also use it quite extensively to remove filler words such as ‘delve’, em-dashes, and the usual AI-generated copypasta: `Identify and remove any generic or overused phrases that make the writing sound artificial. Replace common AI phrases with more original, specific language.`

writing

How I blog with Obsidian, Cloudflare, AstroJS, Github

25 Apr 2025 — 3 minutes read

I’ve been refining my writing and publishing workflow to the point where it feels effortless. It combines Obsidian for writing, AstroJS for building the site, and Cloudflare Pages for deployment.

Everything now lives locally, in plain text, structured neatly for both creative flow and technical control. And this is partly inspired by Kepano’s adherence to the local, plain-text format:

File over app is a philosophy: if you want to create digital artifacts that last, they must be files you can control, in formats that are easy to retrieve and read. Use tools that give you this freedom.

File over app is an appeal to tool makers: accept that all software is ephemeral, and give people ownership over their data.

In Obsidian, I maintain a ‘Blog Post Template’ that includes the necessary frontmatter for new posts. When I’m ready to start a new piece, I simply create a new note from this template. It pre-fills fields like title, date, draft status, and tags, so I can get straight to the act of writing without fiddling with metadata.

Having a clean, consistent structure at the top of every post means the AstroJS build process later has exactly what it needs, and I don’t have to think about it while I’m writing.

The vault is connected to the Astro project through simple symlinks. One symlink pulls in the posts folder from src/content/posts/, where all blog posts live. Another brings in the images folder from public/images/. This way, I can edit blog posts and manage associated images directly from inside Obsidian. Embedded image paths, like /images/2025/01/image-11.png, render correctly in both Obsidian preview and the deployed site without any extra steps.

These are the settings I’ve used on my Obsidian vault:

New link format: Shortest path when possible
Use Wikilinks: yes
Attachment folder path: /images (The folder where assets on AstroJS are stored)

There’s a deliberate separation of concerns between my writing environment and my site development environment.

When I’m inside Obsidian, I’m purely focused on the act of writing: clarifying ideas, connecting thoughts, refining phrasing. I’m not thinking about fonts, layouts, or site performance. It’s just me and the words. When I switch over to the AstroJS codebase, my mindset changes.

There, I’m thinking as a designer and developer, tuning the user experience for readers: improving typography, tweaking the reading flow, optimizing load times, adding small details that make the site more welcoming.

This boundary between writing and publishing helps me preserve the integrity of both processes. Writing doesn’t get bogged down in technical details, and development isn’t clouded by the emotional weight of drafting and editing. Each activity gets the attention it deserves.

When a post feels ready, I simply change the draft: true field in the frontmatter to draft: false, commit the change to Git, and push. Cloudflare Pages picks up the update automatically, builds the Astro site, and deploys the changes live, often within a minute. There’s no CMS dashboard to log into, no series of export and import steps, no drag-and-drop interfaces to wrangle. The entire flow reduces publishing to its essence: write, commit, publish.

This system rewards momentum. It stays out of the way. It feels honest and durable, like something that could last decades without needing to change. Most importantly, it keeps the act of writing at the center of the process.

ai-coding

How I build greenfield apps with AI-assisted coding

08 Apr 2025 — 7 minutes read

Building apps with AI-assisted coding can be quite tricky if you start from a blank slate. Previously I used to prompt LLMs like a rookie, saying “fix this, add this, build this”, and so on. This is usually frowned upon in developer circles, and it seems quite an irresponsible way to do AI-assisted programming. But “vibe coding” has so much more to offer, in terms of speed and velocity, and it’s important not to lose sight of the larger goal: to build the right things, and build things right. Programming has taken a weird trajectory recently, and if this works out, why not embrace it?

Any app is only as good as our ability to prompt for it carefully; this can make or break a vibe-coded app. I first came across Harper Reed’s blog post on his LLM-aided coding workflow and felt like sharing something similar based on what I’ve learnt. Harper goes through a lot more LLM assistants; my advice here is specific to the Cursor IDE:

To ease things up, and to avoid writing all the code from scratch, I use the speedrails open-source Rails boilerplate and build my SaaS app on top of it. It provides strong conventions for a production-ready Rails 8 app. Honestly, it’s the only Rails boilerplate you need for most use cases.

This is where you have a natural conversation with the latest reasoning model, letting it think through the whole design of the app with you. You want the chat assistant to find gaps, poke holes, and ask carefully considered questions you might not have considered.

The assistant is your “philosopher in residence”.


Ask me one question at a time so we can develop a thorough, step-by-step spec for this idea. Each question should build on my previous answers, and our end goal is to have a detailed specification I can hand off to a developer.  

This developer who I am going to hand off to is more comfortable with an approach where the core logic is built first, and then once the function is achieved, you iteratively build the scaffolding, backend infrastructure, and finally the frontend user experience.

Let's do this iteratively and dig into every relevant detail. Remember, only one question at a time.  
  
Here's the idea: (Idea)

Coming from a design background, I’d previously attempted a frontend-first approach to building applications so that I could visualise the user experience better. It failed badly: in one scenario, I built a perfect house without the plumbing, the electricity, or the ability to provide shelter. Form should ALWAYS follow function, never the other way round. It’s a trite line, but with multiple vibe-coded apps breaking miserably when I inverted the sequence of form and function, I was humbled by the importance of this designerly quote.

At the end of the question-storm you will reach a natural conclusion, and you then need to synthesize the chat thread into something more concrete. This is where you convert it into a developer-ready specification.


Now that we've wrapped up the brainstorming process, can you compile our findings into a comprehensive, developer-ready specification? Include all relevant requirements, architecture choices, data handling details, error handling strategies, and a testing plan so a developer can immediately begin implementation.

Create a /docs folder in your project directory, and add the generated file there as specs.md.

Once it creates this, I do another round of “poking holes” just to be sure.

Poke holes into this spec and find gaps wherever possible.

I also exhaust my Perplexity Deep Research credits to produce an extensive whitepaper based on the specs.md file.

I then carefully examine the tech-architecture defaults and prefer to pick the ones that are LLM-friendly (for instance, as of 9 Mar 2025, Rails 7.2 is more LLM-friendly than Rails 8).

Once I’m confident with the specs.md file, I move on to the next stage.

I prefer to test the specs at each stage of development and ensure that the tests pass as planned. Especially for non-coders (such as myself), who have no idea whether what’s running actually works, this is a great litmus test for progressively expanding the scope of the app.


Draft a detailed, step-by-step blueprint for building this project. Then, once you have a solid plan, break it down into small, iterative chunks that build on each other. Look at these chunks and then go another round to break it into small steps. Review the results and make sure that the steps are small enough to be implemented safely with strong testing, but big enough to move the project forward. Iterate until you feel that the steps are right sized for this project. 

From here you should have the foundation to provide a series of prompts for a code-generation LLM that will implement each step in a test-driven manner. Prioritize best practices, incremental progress, and early testing, ensuring no big jumps in complexity at any stage. Make sure that each prompt builds on the previous prompts, and ends with wiring things together. 

There should be no hanging or orphaned code that isn't integrated into a previous step. Make sure and separate each prompt section. Use markdown. Each prompt should be tagged as text using code tags. The goal is to output prompts, but context, etc is important as well. 

@specs.md

It should output a prompt plan that you can execute with aider, Cursor, etc. I like to save this as docs/prompt_plan.md in the repo.

I then have it output a todo.md that can be checked off.

Can you make a `todo.md` that I can use as a checklist? Be thorough.

After each phase, ensure that you also provide the reason as to why the scope of each phase was chosen and how it's stacked.

I do this to also understand why each phase is written in a specific way, and why the order was chosen as such.

As you continue to build the app, you can cross off items from the todo list, as shown in this example app:

# blogggg Implementation Checklist

## Phase 1: Core Infrastructure Setup

### Rails Foundation

- [x] Create new Rails 8.0.1 app with PostgreSQL

- [x] Configure modern components:

- [x] RSpec + FactoryBot

- [x] Database Cleaner

- [ ] Configure Hatchbox.io deployment:

- [x] Implement health check endpoint

- [x] Write infrastructure tests:
...
...
...

Now you have a robust plan and documentation that will help you execute and build your project.

The workflow looks something like this:

  • Build prompt_plan.md, specs.md, and todo.md.
  • Set up the boilerplate.
  • Set up git version control and push commits at important milestones.
  • Run the build phase by phase, based on the prompt_plan.md document.
  • After each phase, run integration tests and ensure all of them pass.
  • Once successful, move on to the next phase and continue.

Surprising and scary.

mathematics

We have been scammed by the Gaussian distribution club

08 Apr 2025 — 4 minutes read

Taleb insists that we’ve been scammed by the Gaussian distribution club.

The Gaussian distribution has become ubiquitous in our daily jargon, even in our day-to-day decisions.

“We have been duped by the bell curve. Mandelbrot was the first to rigorously prove that markets are not Gaussian.” – Taleb

Most real-world phenomena, especially complex, human-involved systems, are not well-behaved in any sense: Gaussian distributions are the exception, not the rule.

My world view changed after reading Nassim Taleb’s Fooled by Randomness, and in this essay I’ll set out why:

Most non-deterministic random events can be classified as either thin-tailed or fat-tailed in nature.

Take the average height of the human population: it’s a thin-tailed quantity, since there is a strict upper bound (a ceiling) on how tall the tallest person could be. There are also no complex feedback loops that reinforce each other, and therefore it’s possible to estimate, with some confidence, what the “average height” could be. It can never be the equivalent of the Burj Khalifa, no matter what edge case we consider when modelling this distribution.

Mandelbrot builds on this idea and explains that most natural phenomena don’t follow such well-behaved thin-tailed Gaussian distributions. Instead, they exhibit “wild randomness”. Mandelbrot’s early work was on cotton price fluctuations, where he demonstrated that they were incongruous with Gaussian models.

And it’s not just financial markets; you can see it everywhere: wealth distributions, book sales, wars, Fukushima, pandemics. These are places where a single data point (a Black Swan) can completely disrupt the average. To model real-world risks more accurately, Taleb insists we follow Mandelbrot’s Lévy-stable distributions, which better capture them. What does fat-tailed mean in practice?

Example:

  • If 10,000 people each lose $1, that’s $10,000.
  • If one person loses $10 million, that single event overwhelms the rest.

In these fat-tailed systems there are volatile clusters: price changes in markets are more “jumpy”, and large changes are far more frequent than a Gaussian predicts. In such systems it can become pointless to even forecast, since heavy-tailed distributions like these can have infinite variance.
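This “one observation dominates the sum” property is easy to see numerically. A quick sketch contrasting 10,000 thin-tailed draws (heights) with 10,000 fat-tailed draws (a Pareto variable standing in for losses); the distribution parameters are illustrative, not fitted to anything:

```python
# Thin tail vs fat tail: what share of the total does the single
# largest observation account for?
import random

random.seed(42)
n = 10_000

heights = [random.gauss(170, 10) for _ in range(n)]     # thin-tailed
losses = [random.paretovariate(1.1) for _ in range(n)]  # fat-tailed

thin_share = max(heights) / sum(heights)
fat_share = max(losses) / sum(losses)

print(f"tallest person's share of total height: {thin_share:.5f}")
print(f"largest loss's share of total losses:   {fat_share:.3f}")
```

In the Gaussian case the maximum is always a negligible sliver of the sum, roughly 1/n; in the Pareto case a single draw routinely carries a visible chunk of the whole total.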

The result is a form of epistemic humility: you don’t use confidence intervals, you don’t use standard deviation, you don’t even make probabilistic forecasts. You could take all the standard-deviation math you learnt from school textbooks and throw it in the dustbin.

Instead, you focus on other aspects of risk management: What is the maximum loss you can absorb? Are there non-linear payoffs? Hedging strategies? You might also do more “stress testing” to understand the jumpiness, rather than pointless scenario modelling.

Once we acknowledge that we’re living in a Lévy-stable world, not a Gaussian one, our decisions change. Take portfolios: in the Gaussian world, we might diversify across many uncorrelated assets, expecting some of them to do well. In a Lévy-stable world, we acknowledge that market crashes can be 100x more likely than predicted (the rule, not the exception), and we might therefore switch to a form of the barbell strategy: 90% in ultra-safe assets (gold, cash, farmland), and 10% in high-risk, volatile, high-optionality assets (e.g. startup equity, crypto).

Similarly with insurance: in the Gaussian world, an insurance company might sell lots of policies assuming claims average out. In a Lévy-stable world, you acknowledge that one freak event (COVID, Fukushima, 9/11) can wipe out 10 years of profits; the compound tail risk is huge. So you spend more time on stress testing, trying to limit your maximum exposure to a single catastrophic event. This thinking even applies to national infrastructure, cybersecurity, and so on, where most resource allocation goes towards minimising the most frequent issues (threats circulating in newspapers, small outages, recent scandals), while the worst breaches come from the unknown unknowns.

A Lévy-stable worldview pushes us to simulate catastrophic scenarios, not just average-case scenarios.

| Domain | Gaussian Approach | Lévy-Stable (Fat-Tailed) Approach |
| --- | --- | --- |
| Investing | Diversify, maximize Sharpe ratio | Barbell strategy, seek asymmetry |
| Entrepreneurship | Plan, forecast ROI | Small bets, high asymmetry, fail fast |
| Infrastructure | Mean-time-to-failure modeling | Design for rare catastrophic failure |
| Career | Steady ladder climbing | Seek optionality, build many convex exposures |
| Risk modeling | Use standard deviation, confidence intervals | Use stress testing, max drawdown, convex payoff maps |

In some product-related decisions, we deal with the problem of incentives. Crudely, we can treat incentive problems as either zero-sum games or positive-sum games. I thought that was a great framing, and carried it through my worldview, my life, and my regular product work, until I found a better framing, a better explanation!

I’ve dived into game theory more recently and absorbed some core ideas that better explain situations involving people and incentives:

(a) look at every encounter as a game, with upsides and downsides attached to the various actions.

(b) to truly win the game, minimise the downsides and increase the upsides as much as possible.

This is the crux of what I have understood, and it has been a highly influential heuristic in my decision-making.

The first thought experiment everyone encounters when they hear “game theory” is the prisoner’s dilemma. As a recap, this is a situation where two suspects are arrested and interrogated separately:

  • If both stay silent (co-operate), they get 1 year each.
  • If one betrays (defects) while the other stays silent, the betrayer goes free and the silent one gets 10 years.
  • If both betray, they get 5 years each.

| | Cooperate (Silent) | Defect (Betray) |
| --- | --- | --- |
| Cooperate | 1y / 1y | 10y / 0y |
| Defect | 0y / 10y | 5y / 5y |

And if it’s a single-shot prisoner’s dilemma (i.e. it happens only once), then the best strategy is to defect. This follows from the payoff ordering: temptation to defect > reward for mutual cooperation > punishment for mutual defection > sucker’s payoff.

Contrast this with another scenario, the stag hunt. In this example, two hunters can:

  • Cooperate to hunt a stag (big reward), but it requires both.
  • Hunt hare alone (small reward), but guaranteed.

| | Hunt Stag | Hunt Hare |
| --- | --- | --- |
| Hunt Stag | 4 / 4 | 0 / 3 |
| Hunt Hare | 3 / 0 | 3 / 3 |

So the best payoff happens when both hunters coordinate.

| Feature | Prisoner’s Dilemma | Stag Hunt |
| --- | --- | --- |
| Type of game | Dilemma of incentives | Dilemma of coordination |
| Risk | Getting exploited by a defector | Being abandoned in cooperation |
| Dominant strategy | Defection | None (depends on expectations) |
| Equilibrium outcome | (Defect, Defect) | Either (Stag, Stag) or (Hare, Hare) |
| Cooperation motive | Must overcome self-interest | Must overcome uncertainty/trust |
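The “dominant strategy” row can be verified mechanically from the two payoff matrices above. A small sketch, with prison years negated into utilities so that higher is always better:

```python
# Check for a strictly dominant strategy in a 2x2 game.
# Payoffs are (row player, column player) utilities.

def dominant_strategy(payoffs, strategies):
    """Return the row player's strictly dominant strategy, or None."""
    for s in strategies:
        others = [t for t in strategies if t != s]
        if all(payoffs[(s, o)][0] > payoffs[(t, o)][0]
               for o in strategies for t in others):
            return s
    return None

# Prisoner's dilemma: years in prison, negated into utilities.
pd = {
    ("C", "C"): (-1, -1), ("C", "D"): (-10, 0),
    ("D", "C"): (0, -10), ("D", "D"): (-5, -5),
}
# Stag hunt payoffs from the matrix above.
stag = {
    ("Stag", "Stag"): (4, 4), ("Stag", "Hare"): (0, 3),
    ("Hare", "Stag"): (3, 0), ("Hare", "Hare"): (3, 3),
}

print(dominant_strategy(pd, ["C", "D"]))          # "D" -- always defect
print(dominant_strategy(stag, ["Stag", "Hare"]))  # None -- depends on the other hunter
```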

Let’s take an example of two tech startups planning to create a joint standard for “blockchain” ethics.

If we look at this in terms of prisoner’s dilemma:

Each firm can cheat and ignore the standard regulatory framework set with the other. If one cheats (ignores the standard to move faster) while the other cooperates (develops together), the cheater wins the market and the cooperator loses. Both can cooperate, but a continued incentive to defect remains.

In the same way, we can also look at it in terms of the stag hunt:

Both firms can either align on a unified blockchain safety protocol (which might lead to a big win), or do their own thing (less risky, with smaller gain).

The stag hunt framing applies if both parties benefit most from mutual cooperation, but suffer only minor losses if they act alone. The prisoner’s dilemma framing applies if defection yields a significant individual advantage, and mutual defection is net worse than cooperation.

So which is true? Is it a stag hunt or a prisoner’s dilemma? Reality is far more complex, and in this case it probably alternates between the two, depending on (a) the number of players (more actors means more incentive to defect), and (b) whether one company has more compute/data and can afford to defect.

A simple thumb rule to distinguish situations into either stag hunt, or prisoner dilemma:

If defecting is always individually better, regardless of what the other person does, it’s a prisoner’s dilemma (closer to a zero-sum game). If cooperating is best only when others also cooperate, it’s a stag hunt (closer to a positive-sum game).

mathematics

I was wrong about optimal stopping

07 Apr 2025 — 2 minutes read

If you were tasked with finding the tallest mountain and went searching in a faraway land surrounded by a series of mountains, how would you settle on the tallest one, when you could always go farther and find even taller mountains (if only you explored more)?

There are various names for this: the “secretary problem”, or simply the “optimal stopping” problem, which attempts to give a mathematical answer to when you should stop in such explore-versus-exploit situations.

I first saw this mentioned in the book, Algorithms to Live By, and was mesmerized by the practical applications of it, in everyday life. And for years, I thought the optimal stopping problem was all about the timeline. You go 37% of the way through the time period, and you make the big decision if you see a better option. Not too soon, not too late.

But that was so wrong. It is not “reject the first 37% of your time window, then choose the next best”. Mathematically, it’s about the number of discrete options (the count of candidates), not time: skip the first n/e of them, then commit to the next one better than all you’ve seen.

If we were to extend this to hiring candidates:

  • Wrong: “We’ll interview for 3 months, then decide.”

  • Right: “We expect ~20 candidates. Reject the first 7–8, hire the next best one.”

For career moves:

  • Wrong: “Switch jobs after the first 5 years.”

  • Right: “If you expect to try ~10 career directions, explore the first 3–4, then commit to the next clearly better one.”

For startups:

  • Wrong: “Spend 1 year scanning ideas, then commit.”

  • Right: “If I’ll vet ~25 ideas, benchmark the first 9, then jump on the next standout.”

The only catch is that you usually won’t know the total n in advance: how would you guess the total number of job offers, partners, or startup ideas you’ll encounter? But when you do have a concrete value for n, the rule becomes easy to apply (e.g. hiring rounds, apartment hunting, etc.).
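The count-based rule is easy to simulate. A small sketch, assuming n = 20 candidates with distinct random scores (the trial count and score range are arbitrary):

```python
import math
import random

def secretary(candidates, k):
    """Skip the first k candidates, then take the first one better than all seen."""
    best_seen = max(candidates[:k]) if k else float("-inf")
    for score in candidates[k:]:
        if score > best_seen:
            return score
    return candidates[-1]  # nobody beat the benchmark; forced to take the last one

def success_rate(n, k, trials=20000):
    """Fraction of random orderings in which the rule picks the overall best."""
    wins = 0
    for _ in range(trials):
        candidates = random.sample(range(n * 10), n)  # distinct scores, random order
        if secretary(candidates, k) == max(candidates):
            wins += 1
    return wins / trials

n = 20
k = round(n / math.e)  # ~37% of the *count* of options, not of the calendar
print(k, success_rate(n, k))
```

For n = 20 this skips the first 7 candidates, matching the hiring example above, and the rule finds the single best candidate in roughly 37% of orderings, which is the classic result.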

rough-notes

Thinking like a ship

05 Apr 2025 — 4 minutes read

It took me a long time to realize that the arguments we have aren’t always about facts. They are about values.

Reading Jonathan Haidt’s The Righteous Mind made this clear: liberals or conservatives, activists or traditionalists, we’re all wired with different moral priorities (care, fairness, loyalty, authority, sanctity).

They feel different things are sacred. What seems obviously right to one clan feels intuitively wrong to another.

My partner and I took Haidt’s moral foundations questionnaire together recently, and it was fun to see the contrast in some of our responses: we both cared about fairness and compassion, but where I leaned toward equity, she leaned toward loyalty and cultural continuity. That was interesting:

Neither of us was wrong, but we realised we had different inner compasses. And without naming those differences, we mistook friction for betrayal.

So when I later read Holden Karnofsky’s ship metaphor—and the analogy of Rowers, Steerers, Anchors, Equity, Mutineers—it clicked like a lock.

In Holden Karnofsky’s own words,

The Rower values progress. Rowers want to row to a faraway promised land, full of opportunities.

I use "rowing" to refer to the idea that we can make the world better by focusing on advancing science, technology, growth, etc. - all of which ultimately result in empowerment, helping people do whatever they want to do, more/faster. The idea is that, in some sense, we don't need a specific plan for improving lives: more capabilities, wealth, and empowerment ("moving forward") will naturally result in that.

Rowing is a contentious topic, and it’s contentious in a way that I think cuts across other widely-recognized ideological lines.

To some people, rowing seems like the single most promising way to make the world a better place.

The Anchor, stability. The Steerer, foresight. The Equity clan, justice. The Mutineer, systemic overhaul. They’re not selfish or evil—they’re guided by different lights. Everyone on the ship is doing what feels morally necessary to them. And everyone thinks they are the chosen ones destined to steer the ship. And that’s exactly the problem.

Some insist we’re not moving the ship fast enough. They row with fierce intensity, believing speed is salvation.

Others clutch the wheel and bark about direction: where are we even going? What storms lie ahead? Some cling to the railings, saying we’ve gone too far already. They want to drop anchor, keep the hull intact, preserve the rituals that gave the ship its soul. And then there are those below deck shouting that the entire structure is rotten. The keel is cracked. The map is a lie. The ship was never meant for all of us.

Sometimes, trust erodes simply because of mismatched moral vocabularies. One person might cry, “We must go faster!” and another hears, “You’re ignoring those overboard.” A third hears, “You’re disrespecting the captain,” and a fourth yells, “Why are we even on a ship?”

Just as Jonathan Haidt sorts values into buckets (care, fairness, loyalty, authority, and sanctity), Karnofsky sorts people into buckets of their own.

So while Haidt helps us understand the people, Karnofsky’s ship gives me a grounding framework for mapping what’s really going on: a fight between value systems, not between the “right” and the “wrong” as we conventionally define them.

And nobody is right or wrong; everyone is right in their own way, and their ways are very different.