Hello friends, family, supporters, and Segment-ers, it’s good to chat with all of you again.
For me, and I’m guessing for many of you, last week was a hectic week. My last day at Segment was Monday, when the acquisition by Twilio officially closed. Tuesday through Friday saw us through a very tumultuous election. Saturday the results were finally called, and the Castro ERUPTED in celebration.
I won’t dwell too much on the election here, but I’m exhaling a sigh of relief upon seeing the results and the transition website. The future is bright!
I haven’t quite settled on a true format for this newsletter, but generally speaking, I plan to cover…
things I’m learning
things I’ve written
things I’m building (still WIP for the moment)
If you have any thoughts, feedback, or other areas to cover, please shoot me a reply. I’m always excited to dig in more. I may experiment with keeping these emails shorter, and instead linking out to more docs/links.
Without further ado, let’s dive in.
Computational Thinking With Julia
I’ve started taking an online course offered by MIT called Introduction to Computational Thinking. It covers a wide variety of topics: viral disease modeling, statistics, NLP, and image modification. If you are into learning a little bit of math, science, and data modeling, you’ll feel right at home with the material.
A new education paradigm
I find the course itself interesting for a number of reasons.
First, it’s designed from a remote-first perspective. Lectures are anywhere from 5-35 minutes, and are hosted on YouTube. The lecturers are also some of the world’s best educational YouTubers—if you’ve ever watched a 3Blue1Brown math video, that’s the sort of quality you see in this course. It’s full of engaging content, concrete examples, and people who actually know how to produce a well-designed online class.
Second, the course is taught entirely through Pluto Notebooks. Pluto is pretty similar to Jupyter (the Python notebook) but with a number of enhancements. The handling of state is far better: instead of a complex and often hidden state machine, the state you see is the state you get. Additionally, Pluto is written in Julia—a language designed specifically for scientific computing and working with things like matrices, statistics, and image processing.
The result makes for an incredibly delightful experience playing with little bits of data and instantly seeing their visualizations reflected in your notebook. If you’re curious at all, I’d highly recommend checking it out.
What follows are some quick notes on what I’ve learned. Feel free to skip to the end.
The course starts out with image (and matrix) convolution. Convolution ends up being a fairly simple concept once you see a few examples.
The general idea is this: take every point in a matrix and combine it somehow with its nearby or neighboring elements. For a generic matrix, this involves taking the values around an individual element, multiplying each by the corresponding weight in a “kernel” matrix, and summing the results. Then we move on to the next element.
For an image, we can imagine it as a matrix of pixels. We’ll go pixel by pixel, and for each one, multiply that pixel by the center element of our matrix. We’ll combine that value with the value of each surrounding pixel, multiplied by the weights of our kernel.
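In code, that pixel-by-pixel procedure might look like this minimal Python sketch (using NumPy; the function name and border handling are my own choices, not the course’s. Strictly speaking, convolution flips the kernel, but for the symmetric kernels discussed here the flip makes no difference, so it’s omitted):

```python
import numpy as np

def convolve2d(image, kernel):
    """Slide the kernel over the image: for each pixel, multiply its
    neighborhood element-wise by the kernel weights and sum. Pixels
    past the border are handled by clamping to the nearest edge."""
    h, w = image.shape
    k = kernel.shape[0] // 2  # kernel radius (assumes an odd, square kernel)
    out = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            total = 0.0
            for dy in range(-k, k + 1):
                for dx in range(-k, k + 1):
                    yy = min(max(y + dy, 0), h - 1)  # clamp at the border
                    xx = min(max(x + dx, 0), w - 1)
                    total += image[yy, xx] * kernel[dy + k, dx + k]
            out[y, x] = total
    return out
```

Real libraries do this far faster with vectorized operations, but the triple loop makes the “neighborhood times weights, then sum” idea explicit.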
This animation does a decent job of showing convolution in action.
To give you a more concrete example, let’s suppose we have an image of a little chipmunk…
Let’s start with the “identity” kernel. In this case, the “1” at the center is given full weight, and the values around it are all zero.
The result is that the image is unchanged. Each pixel maps to exactly its original value, so the output image is identical to the input.
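A quick numeric sketch of the identity kernel at work, on a hypothetical 3x3 patch of pixel values (not the actual chipmunk image):

```python
import numpy as np

# The 3x3 "identity" kernel: full weight at the center, zero elsewhere.
identity = np.zeros((3, 3))
identity[1, 1] = 1.0

# Multiplying a neighborhood element-wise by this kernel and summing
# just returns the center pixel, so the image passes through unchanged.
patch = np.array([[10., 20., 30.],
                  [40., 50., 60.],
                  [70., 80., 90.]])
center = float((patch * identity).sum())
print(center)  # 50.0 — the center pixel, untouched
```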
But what happens if we give each pixel equal weight… and combine it with the neighboring pixels?
The result is a blurred image. We’ve now muddied each individual pixel with its neighbors—each output pixel is the average of itself and its 8 closest friends!
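Here’s the same numeric sketch for the equal-weight (box blur) kernel, again on a hypothetical patch:

```python
import numpy as np

# A 3x3 box-blur kernel: every element gets equal weight 1/9, so each
# output pixel becomes the average of itself and its 8 neighbors.
box = np.full((3, 3), 1.0 / 9.0)

# A single bright spike surrounded by black pixels.
patch = np.array([[0., 0., 0.],
                  [0., 9., 0.],
                  [0., 0., 0.]])
blurred_center = float((patch * box).sum())
print(blurred_center)  # ≈ 1.0 — the spike is averaged down with its dark neighbors
```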
Blurring is cool, but there are more exciting things we can do. By applying negative weights, we can start picking up edges. The following matrix will pick up “edges” in the horizontal direction.
If there is a light spot on the left, and a dark spot on the right, our matrix will pick it up and output a high value. We’ll give slightly more weight (weight 2 vs weight 1) to the pixels directly to the left and right.
Combining a convolution of this matrix with its pair in the vertical direction gives us a very nice way of detecting various edges!
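The weight-2-vs-weight-1 kernel described above matches the classic Sobel operator, so here’s a sketch with that pair (my assumption — the course’s exact weights may differ; the patch is again hypothetical):

```python
import numpy as np

# Sobel kernels: Gx responds to horizontal intensity changes
# (light-to-dark left-to-right), Gy to vertical ones.
Gx = np.array([[1., 0., -1.],
               [2., 0., -2.],
               [1., 0., -1.]])
Gy = Gx.T

# A patch that is bright on the left and dark on the right.
patch = np.array([[9., 9., 0.],
                  [9., 9., 0.],
                  [9., 9., 0.]])

gx = float((patch * Gx).sum())  # strong horizontal response
gy = float((patch * Gy).sum())  # no vertical change, so zero
magnitude = np.hypot(gx, gy)    # combine the two directions
print(gx, gy, magnitude)        # 36.0 0.0 36.0
```

The combined magnitude is what edge-detection visualizations typically show: large wherever intensity changes sharply in either direction.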
Knowing where the edges are allows us to do some very cool stuff. The most interesting application is resizing images. Take this example of a person looking at a castle.
Traditional image tools do a poor job of resizing. They do one of two things. Photoshop will remove half of the pixels, resulting in an image that looks squished (see how tiny the castle below gets?!)
Figma takes a different approach: it crops the image, which isn’t quite what we want either.
Fortunately, we can combine the ideas we’ve seen so far to resize in a smarter way called seam carving. Seam carving removes the least necessary information—in the example below, mostly sky—to give us a smaller image that looks far more natural.
The general principle to remove a single column of pixels from the image works like this…
run an edge-detection convolution on the image to find the areas where there is the most “information” or energy stored
build a path from each top pixel of our image to the bottom, following the path of least resistance (where there’s little energy). At each step, the path can move to the pixel directly below it, or to one of its two diagonal neighbors below.
remove whichever path contains the least total information
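The steps above can be sketched as a small dynamic program, assuming the edge-detection pass has already produced an `energy` matrix (the function and variable names here are my own, not the course’s):

```python
import numpy as np

def find_vertical_seam(energy):
    """cost[y][x] = cheapest total energy of any top-to-bottom path
    ending at pixel (y, x); each step moves straight down or to one
    of the two diagonal neighbors below."""
    h, w = energy.shape
    cost = energy.astype(float).copy()
    for y in range(1, h):
        for x in range(w):
            lo = max(x - 1, 0)
            hi = min(x + 1, w - 1)
            cost[y, x] += cost[y - 1, lo:hi + 1].min()
    # Walk back up from the cheapest bottom pixel to recover the seam.
    seam = [int(cost[-1].argmin())]
    for y in range(h - 2, -1, -1):
        x = seam[-1]
        lo = max(x - 1, 0)
        hi = min(x + 1, w - 1)
        seam.append(lo + int(cost[y, lo:hi + 1].argmin()))
    return seam[::-1]  # one x-coordinate per row, top to bottom

# A toy energy matrix with a cheap diagonal running top-left to bottom-right.
energy = np.array([[1., 9., 9.],
                   [9., 1., 9.],
                   [9., 9., 1.]])
print(find_vertical_seam(energy))  # [0, 1, 2] — follows the low-energy diagonal
```

Deleting that seam (one pixel per row) shrinks the image by one column; repeating the process shrinks it further while preserving the high-energy regions.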
I’ve sped up the following image just to get it to fit in an email/webpage, but here’s the idea. At the beginning, the “lowest energy paths” are outlined in white. As the image shrinks, those paths and pixels are removed to preserve the important pieces of information, while still making the image smaller.
The result is much better!
Invest like the best
I started more earnestly getting into a few new podcasts this week. The first one is Invest Like the Best by Patrick O’Shaughnessy. I’m not normally into most VC podcasts, but Patrick actually does a very good job of getting interesting guests and asking them interesting questions.
I’m including my brief notes here in case you find them interesting.
Interviews with Zillow CEO Rich Barton and Altimeter Partner Brad Gerstner
About Rich: Rich is the founder and CEO of Zillow. He also founded a number of other companies (Glassdoor, Expedia), has been a venture partner at Benchmark, and sits on Netflix’s board of directors.
About Brad: Brad is a partner at Altimeter, which provides various forms of later-stage venture capital. Gerstner led the firm’s investment in Snowflake, and shares a lot of anecdotes from that experience.
Power to the people: The biggest thesis Rich calls out is the idea of bringing "power to the people". When he was growing up, it always used to annoy him that he would have to wait for a travel agent to type some instructions into a computer. "Why can't I just do this myself?" That's how Expedia was born.
It turned out that idea was a powerful force throughout many of his next companies. Zillow allowed users to spend more time researching which houses to buy than any agent ever would. The "Zestimate" also turned out to be a huge SEO play. It let individuals get a taste of what their home could be worth in a way that they never had before.
The Wizard of Oz in founders: In Barton's view, good founders need three things... a heart, a brain, and courage.
Courage can be the toughest of the three to diagnose. When is it a good idea to take a big risk that will fundamentally change the course of the company? When is it bound to fail? Barton sees cofounders as being a key to taking those big risks. Non-founder-led companies often won't do it.
Company fitness: There's some interesting discussion around 'fitness' of a company. How operationally rigorous is it? Barton: when you have 93% gross margin as part of a media business, it's really easy to become sloppy. Their new CFO was brought in from Amazon to make the company 'fit'. Being 'fit' feels good, you're faster, stronger, and ready to take risks.
They talk a lot about Frank Slootman of Snowflake in this dimension. Frank has been through hard times, where the market compresses or things don't feel like they are on track. It's not that he's against massages or smoothies, or that sort of thing, but he knows that a company will succeed when it’s most operationally fit.
Interview with Anu Hariharan
About: Anu is a partner at YC Growth. We didn’t interact with her as much as Ali, but she is incredibly sharp, and it was interesting to hear her thoughts on the later-stage market. She was previously at a16z, and before that debugged hardware systems at Qualcomm. She says her superpower is debugging: understanding how a system works at a deep level, and where it might be off.
What growth stage partners look for: When investing at the growth stage, partners look for three major things: 1) market, 2) clarity of thought, 3) progress to date. The best founders have incredible clarity of thought when it comes to understanding both their market and what needs to be built.
DoorDash is a great example of this. They went after the suburban market, which had previously been owned by Grubhub. Grubhub didn’t solve the last-mile problem, which DoorDash capitalized on.
Brex founders are another good example. They took an accounting class to prove out their thesis that the market for startup finance was as big and interesting as they'd thought.
As an aside, I've noticed this in my own handful of angel investments. The ones who seem most promising are those who can thoughtfully articulate what they are doing and why they are doing it.
International markets: Anu is bullish on Latin America, Indonesia, and India. Each of these regions is experiencing large increases in GDP per capita, which drives consumption in a pattern similar to China’s.
When and how much to raise: fundraising is both an art and a science. In the early stages (Series A, Series B), funds typically want 15-20% of the company, so you can basically back out the valuation from how much you’re raising and that 15-20% stake. In the later stages, things become much hazier. Each fund will make something like ten $50-100M investments, and they only expect a 5-10x return on each. One interesting point is that there aren’t a ton of different growth-stage firms. When you get to Segment’s size, you have to raise from the same ~10 top-tier firms.
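A quick back-of-the-envelope version of that early-stage rule of thumb, with purely illustrative numbers:

```python
# If a fund wants a 20% stake for a $10M check, the implied post-money
# valuation is just the raise divided by the stake. (Illustrative
# numbers only — not from the interview.)
raise_amount = 10_000_000
target_stake = 0.20
post_money = raise_amount / target_stake
print(post_money)  # 50000000.0 — a $50M post-money valuation
```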
Focus on upside: a key shift in Anu’s mentality was moving from a thesis of “how might this fail” to “how big could this be if it succeeds”. It’s a bit different from the standard way of designing systems she was used to as an engineer.
Blue Ocean Strategy — an interesting take on how to differentiate yourself from the “red oceans” where everyone competes, and strive for “blue oceans” which create new value.
Radical Markets — a ton of very interesting ideas here which seem to nicely bridge the gaps between socialism and capitalism.
The Art of Doing Science and Engineering — a mixture of life lessons and information theory fundamentals.
I shared a few thoughts this week on leaving Segment and what’s next. If you’re on this list, odds are good that you had an incredible impact on everything in this post. Link
I wrote up my notes from The Power of Habit. I’ve noticed that quarantine was disruptive to a number of my routines, many of which I didn’t fully realize I had. I’d highly recommend it if you’re trying to develop new habits. Link
There’s not much to show here yet. Currently I have a few ideas that I think are probably best solved with Rust + WASM. If you’ve got great resources for picking these up, please send them my way!