Gilad Lotan takes on Embeddings


You can’t go ten minutes without someone talking about AI, but most of the time, it’s hype without substance. Almost Human exists to change that; to dig past the noise and reach the sharp ideas, technical breakthroughs, and human stories that actually shape the future.
Let’s cut through the noise.
Most people hear “AI” and think prompts and chatbots. This episode goes deeper—into the infrastructure layer that quietly powers it all: embeddings.
Eden talks with Gilad Lotan, Head of AI/Data Science & Analytics at BuzzFeed, about how turning text (and behavior) into vectors unlocks search that actually understands meaning, personalization that works with less data, and brand-safe monetization without heavy PII. We break down the math in human terms (cosine similarity, “king − man + woman ≈ queen”), show how BuzzFeed uses embeddings across hundreds of millions of content items, and explore what happens when you cluster not just content—but people by taste. We also dig into cost realities (vector DBs vs. model inference), why you don’t need to build a foundation model to be an AI company, and the adtech future when “intent” shifts from keywords to user embeddings.
If you’re an entrepreneur or operator wondering where the real product leverage is in AI—this is the layer to master.
What you’ll learn
- Embeddings 101: why vectors beat tags/keywords for meaning
- Recs that improve with less signal (and fewer privacy headaches)
- Clustering users by taste vs. averaging away what makes them unique
- How user embeddings could rewrite adtech beyond keyword intent
- Cost & stack gotchas: vector stores, update cadence, where to spend
- Multimodal on the horizon and when text alone is enough
Please rate this episode 5 stars wherever you stream your podcasts!
00:00:00:00 - 00:00:22:10
Eden Shochat
I've actually known Gilad for 30 years now, and I was excited to see some of the early Twitter clustering work that even today you wouldn't see in most companies. We'll explore what embeddings really mean, why this is important for entrepreneurs and not just a byproduct that people don't need to know about, and why
00:00:22:10 - 00:00:33:14
Eden Shochat
you don't actually need to build a foundation model to be an AI company, but you also can't just do a GPT wrapper and call yourself this new type of AI company.
00:00:54:06 - 00:01:24:00
Eden Shochat
Welcome to the first episode of Almost Human, a podcast that explores today's AI world: what's hype, what's transformative, and how it reshapes life for investors, startups, and Israeli tech. This podcast is not just for deeply technical engineers, but also for entrepreneurs and professionals in the space eager to cut through the noise. I'm almost excited. AI has its buzzwords, but every once in a while you hit a concept that quietly underpins everything, and suddenly you can't stop seeing what you can do with it.
00:01:24:01 - 00:01:45:14
Eden Shochat
For me lately, that's embeddings. Most people, even seasoned engineers, still think of them as a way to make text lookups work in a RAG system. But embeddings are way bigger than that. They're how you can cluster content and people by taste, by behavior, by patterns you didn't even know were there. That shift unlocks entirely new ways to build products.
00:01:45:18 - 00:02:07:20
Eden Shochat
Think about what happens if, instead of selling keywords like Google AdWords, you start selling user embeddings. Entire industries from adtech to affiliate marketing get rewritten. And there is a deeper layer: once you operate at the embeddings level, you realize alignment itself changes. Feed an LLM tokens, it plays by the rules; feed it embeddings, and suddenly you're closer to machine code.
00:02:07:21 - 00:02:33:23
Eden Shochat
The spirit of the law doesn't apply the same way. That's both exciting and unsettling. To explore all this, I'm joined by Gilad Lotan, head of AI data science and analytics at BuzzFeed. His work sits at the intersection of data, culture and creativity, understanding why people click, share and connect. Together, we'll explore what embeddings really mean, why they're the hidden layer behind so much of AI's power, and what opportunities this creates for entrepreneurs.
00:02:34:01 - 00:02:43:19
Eden Shochat
I'm Eden Shochat, equal partner at Aleph. Welcome to Almost Human. Let's cut through the noise. So, Gilad, welcome.
00:02:43:21 - 00:02:47:06
Gilad Lotan
Thank you. Eden. It's great to be here. And it's always so nice to see you.
00:02:47:07 - 00:03:05:08
Eden Shochat
Let's jump right in. So most people think about prompts and chatbots when they think about AI, but embeddings are kind of this infrastructure layer. Some, including you, would probably call it unsexy, right? But it's the cornerstone of anything and everything about LLMs. Can you cover this quickly?
00:03:05:09 - 00:03:34:23
Gilad Lotan
Yeah, absolutely. You're absolutely right. Embeddings are the least sexy of the AI topics, and they're the most important thing that nobody talks about, severely under-hyped. So I'm glad we're having this conversation. The development and capabilities of embeddings over the past few years have really opened up immense opportunities for us as a content company.
00:03:35:00 - 00:04:01:12
Gilad Lotan
But honestly, for any service that has to deal with content, which includes many, many of the tech companies that we know and love. I think one of the core problems historically with data science, and I've worked in the field for many, many years, is that you're always trying to understand what the content is about, and you're very limited by tags, by named entities.
00:04:01:14 - 00:04:23:18
Gilad Lotan
We used to use things like TF-IDF, where we try to count the words, normalize, and see the frequency of appearance of words in this corpus versus that other corpus. So there are all these NLP tricks that we learned that were very useful but were super limiting in really understanding what a piece of text or what a string is about.
00:04:23:20 - 00:05:05:14
Gilad Lotan
And over the past ten years, there have been all these really incredible advances in our ability to extract semantic meaning from a string. So now, rather than classifying and saying this text is in this category, this is about celebrity, this is about sports, this is about news, you can get a really nuanced understanding and say the appearance of these words together makes it so that I can match this string with a different string that has very different words but captures the same semantic meaning.
00:05:05:16 - 00:05:28:01
Gilad Lotan
So I'd say that capability is really unlocked by embeddings: the ability to say this text is similar to this text, not because they have the same words, but because they have similar meanings. And that's a huge, huge difference and a huge unlock for products. There are so many opportunities that we've found.
00:05:28:01 - 00:05:51:22
Eden Shochat
One of the early examples of embeddings was king minus man plus woman equals queen, right? How do you explain that? People don't realize how meaning turns into math, which is effectively what embeddings are all about. So why does this sentence math work?
00:05:52:00 - 00:06:19:07
Gilad Lotan
Yeah. So the interesting thing about embeddings is that they're effectively a layer from a neural network that has been trained on, let's say, a corpus of text, if we're talking about text embeddings. So you train this neural network that has all these layers, and what you're doing is effectively extracting one of the layers, usually close to the output.
00:06:19:09 - 00:06:44:15
Gilad Lotan
So you're still operating in high-dimensional space, right? And you're effectively turning a string, or words, into a vector. And so what you commonly do with embeddings is say, okay, here's a string, let's embed it, meaning turn it into a vector. And here's another string, let's embed it. And then let's calculate how similar they are.
00:06:44:17 - 00:07:15:08
Gilad Lotan
So you're effectively operating with vector math: you're calculating cosine similarity, you can subtract and add vectors. And because the neural network is trained on semantic meaning, you can effectively operate with math on words, which I think is very intuitive for anyone who uses the tools but may not be quite as intuitive for folks who are not in the space.
00:07:15:10 - 00:07:35:07
Gilad Lotan
But it makes this really useful, because the act of trying to find something similar becomes a simple math operation, and the act of trying to find a group of items that are similar to each other is, again, a pretty simple math operation.
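To make the vector math concrete, here is a toy sketch in Python. The 2-D vectors are invented purely for illustration (real embedding models learn hundreds or thousands of dimensions from a corpus), but the mechanics of cosine similarity and the king minus man plus woman analogy are the same:

```python
import math

# Toy 2-D "embeddings": axis 0 is roughly royalty, axis 1 roughly gender.
# These values are invented for illustration only.
vectors = {
    "king":  [1.0,  1.0],
    "queen": [1.0, -1.0],
    "man":   [0.0,  1.0],
    "woman": [0.0, -1.0],
    "apple": [-1.0, 0.0],
}

def cosine(a, b):
    """Cosine similarity: 1.0 means same direction, 0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# "king minus man plus woman": plain element-wise vector arithmetic.
analogy = [
    vectors["king"][i] - vectors["man"][i] + vectors["woman"][i]
    for i in range(2)
]

# The nearest remaining word to the result is "queen".
nearest = max(
    (w for w in vectors if w not in {"king", "man", "woman"}),
    key=lambda w: cosine(analogy, vectors[w]),
)
```

The same two operations, vector arithmetic plus a nearest-neighbor lookup by cosine similarity, are all a production system does, just at much higher dimension and scale.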
00:07:35:09 - 00:07:59:11
Eden Shochat
Which is interesting because it's a distance, right? So you can search for something similar, but you can also say, hey, I'm looking for a piece of content that is similar to this but as dissimilar to that as possible, right? Because that's also a distance operation. And so you can actually pick and choose between different...
00:07:59:16 - 00:08:06:22
Eden Shochat
It's like, and I think we'll speak about this later, the content filtering of the past, right?
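The "similar to A, dissimilar to B" idea Eden describes can be sketched as a single score: cosine similarity to an anchor item minus cosine similarity to an item to avoid. The catalog and vectors below are toy data, not a production ranking function:

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(x * x for x in b)))

# Hypothetical pre-embedded catalog (toy 2-D vectors).
catalog = {
    "article_a": [0.9, 0.1],
    "article_b": [0.1, 0.9],
    "article_c": [0.7, 0.7],
}

def pick(anchor, avoid):
    # Score rewards similarity to `anchor` and penalizes similarity to `avoid`.
    return max(catalog,
               key=lambda k: cosine(catalog[k], anchor) - cosine(catalog[k], avoid))

best = pick(anchor=[1.0, 0.0], avoid=[0.0, 1.0])
```

Because both terms are just distances in the same space, you can weight them however the product needs, e.g. to diversify a feed away from content a user already saw.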
00:08:07:00 - 00:08:20:09
Gilad Lotan
Yeah. So I can give a specific example that may help understand this. So, you know, at BuzzFeed we write a lot of content, and we've used embeddings in many, many ways, but one way...
00:08:20:09 - 00:08:38:11
Eden Shochat
So, not everyone is familiar. How huge is BuzzFeed? Just give some numbers, articles, views, because embeddings are really cheap, unlike LLMs, which are expensive. At your scale there's no way to analyze every piece with an LLM.
00:08:38:13 - 00:09:01:15
Gilad Lotan
Oh yeah. I mean, we have hundreds of millions of pieces of content. We have video, we have, text. We have long form, short form, we have news under HuffPost. We have lifestyle brands, like, one of them is called Tasty. It has some of the largest, social media pages, like the biggest Facebook page and TikTok pages.
00:09:01:17 - 00:09:24:23
Gilad Lotan
And then BuzzFeed is an entertainment and lifestyle website, hundreds of millions of users. So again, one of the largest digital publishers, and, a lot of our data, obviously there's the content that we create, but also the signals around the content. So who's consuming, how often they come back, what actions are they taking on the site, etc..
00:09:24:23 - 00:09:54:06
Gilad Lotan
And so we've integrated embeddings into our metadata suite and our feature stores. Every time we publish a new article, there's a job that runs, creates an embedding, and integrates it into our recommender systems, right? So the new article can be considered to be displayed near other articles, or to certain users that display preferences.
00:09:54:08 - 00:10:22:21
Gilad Lotan
So that's one very standard approach to why embeddings are very useful for us as a content company that wants to show users content. But there are also a handful of internal tools that help our writers be a lot more efficient. We write a lot of shopping content, and we work very closely with Amazon, Etsy, and other e-commerce shops.
00:10:22:23 - 00:10:48:23
Gilad Lotan
And one of the things that our teams have always struggled with is that they're always putting lists together, lists of items to promote, right? We have these databases of different products that we've promoted in the past, we have all these metrics around them, but it's always been very hard to query this database, because we needed to create tags in place.
00:10:48:23 - 00:11:09:00
Gilad Lotan
And many of the articles on BuzzFeed are not necessarily your typical standard categories. So there could be a writer that wants to create a list of shopping items for, you know, young couples who just moved to New York City. We want a list: here are the ten things you need to get.
00:11:09:02 - 00:11:36:01
Gilad Lotan
And what embeddings do so well, because they have this semantic understanding of text, is identify the items that are most relevant and most similar to this string, you know, furniture, young couples, New York City. Even just that capability itself has saved us countless hours and has made the team a lot more efficient.
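A minimal sketch of the kind of semantic product lookup Gilad describes. In a real pipeline, the query string and each product description would be passed through an embedding model; here the product names and vectors are invented stand-ins so the ranking step is runnable:

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(x * x for x in b)))

# Stand-in for a product database with precomputed embeddings.
# In production these vectors would come from an embedding model.
product_vectors = {
    "compact sofa": [0.9, 0.2, 0.1],
    "stand mixer":  [0.1, 0.9, 0.1],
    "folding desk": [0.8, 0.1, 0.3],
    "garden hose":  [0.0, 0.2, 0.9],
}

def top_k(query_vec, k=2):
    # Rank every product by similarity to the query embedding.
    ranked = sorted(product_vectors,
                    key=lambda p: cosine(product_vectors[p], query_vec),
                    reverse=True)
    return ranked[:k]

# Pretend this is the embedding of "furniture for young couples in NYC".
results = top_k([1.0, 0.1, 0.2])
```

The point of the sketch: no tags or categories exist anywhere in the data, yet a free-form query still surfaces the relevant items, because relevance is computed geometrically.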
00:11:36:03 - 00:12:02:17
Eden Shochat
That's an awesome point about internal tooling. When implementing AI, people in many companies are initially afraid of just showing it to a user, but building internal tools and just making things more efficient is a really interesting topic. But you actually make a point about how we should rethink how a user is defined.
00:12:02:17 - 00:12:27:14
Eden Shochat
Right? Because a user, especially on media sites, is defined by what's interesting for that user. So I assume that now, if you think of BuzzFeed, you say, hey, we're a marketplace: there's some attention the user is willing to give, and we need to show that user the most appropriate piece of content in the most minimal amount of time, and make that happen.
00:12:27:14 - 00:12:47:12
Eden Shochat
It's almost an ongoing process, and it's also educating the system about what was of interest for future viewings. How is that stack built? How do you very quickly define what's of interest for a specific user when they come in?
00:12:47:14 - 00:13:09:21
Gilad Lotan
Yeah, great point. So I think it depends. There are many pieces to this system, but you're effectively talking about a recommender system, right? And how do we build a recommender system that matches the right content with the user at the right time? Historically, you know, we would use approaches that are not personalized
00:13:10:01 - 00:13:34:14
Gilad Lotan
that worked pretty well for many kinds of portals and big websites. You would use this approach called multi-armed bandit, which is a form of reinforcement learning, where you're very quickly looking at signals and updating what you're recommending. But it was hard to personalize using that approach. And there were ways to create clusters or cohorts for personalization,
00:13:34:14 - 00:13:55:02
Gilad Lotan
but it wasn't truly at the user level. And one of the challenges had always been that you had to go through this pretty rigorous approach to feature engineering. Okay, what do we know about the user? Where do they come from? Have we seen them before? Have they clicked on something? Okay, how do we represent their interests,
00:13:55:04 - 00:14:20:00
Gilad Lotan
by tags or keywords or topics? And it was never really great. I mean, I think that's where, when you had social networks and social media, you could use the social graph to understand what they may be interested in, right? So that was a pretty useful signal. But what we can do now with embeddings is say, okay, well, new users we don't know much about.
00:14:20:00 - 00:14:48:18
Gilad Lotan
We know the article you're on right now. We may know where you came from. So we could certainly use that information. But if we have any signal about a user, and usually we do, like they've clicked, they've seen our articles before, we could start to represent a user by using this math: by looking at the embeddings of the articles that they clicked on, or the articles they spent the most time on, or the articles they saved, or actions they took on the article.
00:14:48:18 - 00:15:10:15
Gilad Lotan
So you could start to use this vector representation of interests as a really convenient way to represent a user. And we've seen pretty meaningful improvements when we now make the matches between our content and a user that's represented using this approach.
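One simple version of the user representation Gilad describes: average the embeddings of articles the user engaged with, then match new content against that vector. The numbers are toy values, and how BuzzFeed actually weights clicks versus dwell time versus saves is not specified here:

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(x * x for x in b)))

def mean_vector(vectors):
    # Element-wise average of a list of equal-length vectors.
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

# Embeddings of articles this user clicked (toy 2-D values).
clicked = [[0.9, 0.1], [0.7, 0.3]]
user_vec = mean_vector(clicked)          # roughly [0.8, 0.2]

# Candidate articles the user hasn't seen yet (hypothetical names).
candidates = {"quiz": [0.85, 0.15], "hard_news": [0.2, 0.8]}
recommendation = max(candidates, key=lambda c: cosine(candidates[c], user_vec))
```

Note this is the naive averaging approach; later in the conversation Gilad points out cases where the average lands in a meaningless middle, which motivates the cluster-based representation instead.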
00:15:10:17 - 00:15:43:02
Eden Shochat
Well, the other big matching marketplace, obviously, is AdWords, right? That's probably the most efficient business machine ever built, and to this day 4,000 engineers at Google are trying to figure out what AdWords is and where it's going. And what you said about embeddings tied to a user is actually really interesting, because media sites historically have not been the best monetized, since you didn't have the intent of the user.
00:15:43:04 - 00:16:10:17
Eden Shochat
But now that intent went away for Google as well, and effectively understanding the user, having those kinds of embeddings describing what's of interest to them, is potentially a new form of matching engine. So how does that change the core business notion for media sites, of intent versus finding the right ad units for the right user?
00:16:10:19 - 00:16:34:06
Gilad Lotan
Yeah, I think the difference for media sites now is you can do a lot more with less signal. Historically, media sites were not well positioned to capture a lot of information unless we were talking about loyal users who logged in and came back
00:16:34:09 - 00:16:58:10
Gilad Lotan
frequently. So for viewers who came for a handful of articles, there wasn't enough signal to truly tailor and target campaigns at these users. You can do a lot more with less signal now: represent a user, then identify adjacent campaigns and include users in the targeting for those campaigns.
00:16:58:12 - 00:17:24:17
Gilad Lotan
I would say, you know, the biggest problem for media companies is just the size. Even for a large digital media company, it's very hard to compete with the scale of a platform like Google or Amazon or Meta, who have sort of endless inventory. But I would say it's easier to target now; this stuff is much more accessible.
00:17:24:17 - 00:17:45:14
Gilad Lotan
You don't need a massive team of 4,000 engineers, right? You could do this with a much smaller team and a few existing capabilities. So I do think the advances we're seeing are giving smaller players the ability to own much more of the stack, especially in the advertising space.
00:17:45:16 - 00:18:15:00
Eden Shochat
Actually, you can imagine a world where the database describing the user, the embeddings, is distributed, right? It's not centralized. Google obviously has the signal, which is, hey, you searched for that. And for that matter, Meta has always historically had the database about people. So you could actually see different media companies collaborating, because it's kind of a normalizer.
00:18:15:02 - 00:18:32:08
Eden Shochat
Right? Embeddings are embeddings are embeddings. A user's interests. I can actually collect more information about what's of interest for a specific user, and across multiple users. So this could probably change the adtech stack in more than one way.
00:18:32:10 - 00:19:01:13
Gilad Lotan
Yeah. I mean, all the third-party providers and the advertising world have been changing pretty dramatically, especially since Apple added all those changes to its mobile platform, right? And Chrome was about to launch these changes too, and then they kind of took a step back.
00:19:01:15 - 00:19:29:09
Gilad Lotan
But historically, we didn't use much historical data about users. You just needed to know that a page view is associated with a cookie that you've seen, that you have an email, and you could cobble together a view of a user across websites. So you had all these third-party players be really critical to creating targeting for ad campaigns.
00:19:29:09 - 00:19:37:08
Gilad Lotan
And I think, yeah, I think there's a lot more that could be done within websites now. But also if, publishers kind of band together.
00:19:37:10 - 00:19:41:20
Eden Shochat
That's a sort of invisible IP moat: who the user is, what they're interested in.
00:19:41:20 - 00:20:09:00
Gilad Lotan
And it's especially important now. I mean, if you talk to any publisher, anyone who works in media, everyone talks about direct traffic. We're in a world where there's less and less passing of users from one website to another, and the publishers and media companies that will survive this period are the ones that have a brand that people know and go to directly.
00:20:09:02 - 00:20:34:02
Eden Shochat
Yeah, I fully agree. I'm just saying you also have less of a PII issue, because effectively you're just describing people with a 1536-dimensional vector, right? Just scalar space. So it's not that you know anything unique about them other than what their interests are.
00:20:34:04 - 00:20:35:21
Gilad Lotan
Yeah. As a vector.
00:20:35:23 - 00:20:36:16
Eden Shochat
Yeah.
00:20:36:18 - 00:20:38:06
Gilad Lotan
I know your vector.
00:20:38:08 - 00:21:02:19
Eden Shochat
Yeah. If I look at costs and gotchas, right, we make it seem as if it's an obvious thing and everyone should do this. But in the early days, cloud used to be about 10% of a company's cost structure, your cloud costs, and I now see companies with LLMs taking about 30% of their budget in cost of service.
00:21:02:19 - 00:21:05:18
Eden Shochat
So that's hugely expensive...
00:21:05:20 - 00:21:08:08
Gilad Lotan
Are they training, though? Are they building their own?
00:21:08:08 - 00:21:36:03
Eden Shochat
No, inference, actual inference, even if you just take advantage of an existing model. And that's why distillation is important for companies that are scaling on the LLM side. But actually, if you look at the cost structure of embeddings, what are the big gotchas that, if you could travel back a year, you would advise yourself to take notice of? Embeddings
00:21:36:03 - 00:22:07:19
Gilad Lotan
are actually not the expensive part, I think. The vector database, the vector store, if you're using a hosted vector database, that is probably the most expensive piece. The embeddings themselves are very much available and very efficient, not cost prohibitive. So yeah, as long as you have static content, which we do for the most part: we publish a piece of content,
00:22:07:19 - 00:22:38:13
Gilad Lotan
it doesn't change, and if it does, it's not that often, so we don't have to regenerate an embedding. So for our content, it's very inexpensive. I think it's different at the user scale, and that's where this area is a lot more experimental, where we're creating an embedding for every user. Obviously, if we updated it every few minutes, that would be prohibitively expensive.
00:22:38:15 - 00:23:11:03
Gilad Lotan
But we don't, if we don't need to update it, or if there are ways for us to update embeddings without these inference calls, like, you know, updating daily, I think the cost is manageable. The place where we are seeing the highest cost is obviously the rich media, the generative media, image and video inference and models, and that's obviously a different topic.
00:23:11:03 - 00:23:19:22
Gilad Lotan
But that's hurting, that's hurting a lot. On the text side, though, it feels very accessible.
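One way to refresh a user vector without extra inference calls, as Gilad hints, is plain vector arithmetic on embeddings you already have, for example an exponential moving average. This is a generic technique sketched under that assumption, not necessarily BuzzFeed's implementation:

```python
def update_user_vec(user_vec, article_vec, alpha=0.1):
    # Nudge the user's vector toward the newly read article's embedding.
    # No model inference is needed: the article embedding was computed once
    # at publish time, and the update itself is pure arithmetic.
    return [(1 - alpha) * u + alpha * a for u, a in zip(user_vec, article_vec)]

# A user who only ever read topic A starts drifting after reading topic B.
user = [1.0, 0.0]
user = update_user_vec(user, [0.0, 1.0])
```

Batched once a day, an update like this costs essentially nothing per user, which is what keeps per-user embeddings from becoming an inference bill.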
00:23:20:00 - 00:23:44:00
Eden Shochat
But with the multimodal models that are coming out, many of them don't yet have embedding generation. So I assume one workaround would be, hey, let's look at this image, generate a textual representation, and then use text embeddings. Have you experimented yet, or are you excited about some of the multimodal stuff that is coming up?
00:23:44:02 - 00:24:09:04
Gilad Lotan
Yeah. We haven't found a great use case for it just yet. Although, you know, I think the obviously embedding images along with text, could be really valuable. We've just found that the, the text that we have is enough, and all the metadata that we have is sufficient. And we get really great results with just embedding the text.
00:24:09:06 - 00:24:49:01
Gilad Lotan
So we haven't yet included the images themselves as part of the representation of a piece of content. But I could see it coming; I could see a future use case where it is valuable. I think there's also a question, as this capability becomes integrated into more product surfaces, of what we embed. Do we embed the article? The author information? Maybe updates? Adjacent stuff?
00:24:49:01 - 00:25:12:21
Gilad Lotan
And so you could you could start to get to a point where you have multiple embeddings for every article and also multiple embeddings for every user. They represent different aspects of a user, like one maybe clicks another, maybe likes another, maybe social things their friends like, you know, so so different features that then are used in recommender systems.
00:25:12:23 - 00:25:22:04
Gilad Lotan
So that can scale exponentially, especially as this capability gets integrated into our stack.
00:25:22:06 - 00:25:47:11
Eden Shochat
That's super interesting: clustering, finding the different facets of a person. So how well-rounded a person is, according to the number of embeddings and the distance between the embeddings that represent that same user. The bigger the distance between embeddings, the more things or different topics they're interested in. It's like, what's your diameter of interests?
00:25:47:13 - 00:25:50:05
Eden Shochat
Right? One could think of it that way. I love that.
00:25:50:05 - 00:26:16:19
Gilad Lotan
Yeah. One thing we found is that we actually have to be really careful when we average out interests. If we take, for example, a user's clicks and we average them out, there are instances where the average is actually meaningless. It doesn't tell us much, because if you think of all these different points, the average is in the middle.
00:26:16:21 - 00:26:41:11
Gilad Lotan
And so what we've actually found is that by clustering our articles and our topic space, so we take all the content that we publish, we create clusters, and then we effectively map users to these clusters, we get a better sense of the interests a user has, rather than just simply averaging into the middle.
00:26:41:13 - 00:26:55:00
Gilad Lotan
So yeah, the clusters play a really critical role, and they've become one of our dominant features in all our classification models, our predictive models, and our recommender systems.
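A sketch of the cluster idea: assign each click to its nearest topic centroid instead of averaging everything into one point. The centroid names and values here are invented; in practice they would come from clustering (for example, k-means) over the article embeddings:

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(x * x for x in b)))

# Hypothetical topic centroids learned from the article corpus.
centroids = {"food": [0.9, 0.1], "celebrity": [0.1, 0.9]}

def nearest_cluster(vec):
    return max(centroids, key=lambda c: cosine(centroids[c], vec))

# A user who reads mostly food content plus some celebrity news.
clicks = [[0.8, 0.2], [0.95, 0.1], [0.2, 0.9]]
profile = {}
for v in clicks:
    cluster = nearest_cluster(v)
    profile[cluster] = profile.get(cluster, 0) + 1

# `profile` keeps both interests distinct, while the raw average of the
# clicks (about [0.65, 0.4]) would land in the uninformative middle.
```

The resulting per-cluster counts make a natural feature vector for downstream classification and recommendation models, which matches the role Gilad describes for clusters in their stack.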
00:26:55:02 - 00:27:10:23
Eden Shochat
Now, what's the quickest win? If I don't think of RAG systems, which just generally use embeddings to find text that might be relevant to push into the context, what's the quickest win, maybe for one or two types of companies?
00:27:11:00 - 00:27:39:21
Gilad Lotan
Yeah. So for any content company, I mean, recommender systems: we saw double-digit improvement in our click-through. So that's very clear. I think the other piece is retrieval. I'm not thinking RAG, but effectively search, internal search, or different ways to comb through your corpus of items as a company.
00:27:39:23 - 00:28:04:04
Gilad Lotan
It could be Slack messages, it could be customer service texts. I think it could be a range of things, but as long as it includes text that has semantic meaning, you could really query in a pretty flexible way and get related content.
00:28:04:06 - 00:28:36:22
Eden Shochat
This is awesome. That was my conversation with Gilad Lotan, head of AI, data science and analytics at BuzzFeed. And I think it's clear that embeddings aren't just another buzzword; they're a new building block. Whether you're clustering content, clustering people, or rethinking how business models like adtech might evolve, embeddings open up entirely new ways to create value. If this episode cut through some of the noise for you, share it with a friend or colleague who's building in AI, and make sure to follow Almost Human so you don't miss what's coming next. I'm Eden Shochat, thanks for listening.
00:28:37:03 - 00:28:39:20
Eden Shochat
We'll see you next time.
Follow Gilad on LinkedIn: https://www.linkedin.com/in/giladlotan/
Follow Gilad on X: https://x.com/gilgul
Subscribe to Almost Human here: https://www.aleph.vc/almost-human
Learn more about Aleph: aleph.vc
Sign up for Aleph’s monthly email newsletter: https://newsletter.aleph.vc/
Subscribe to our YouTube channel: https://www.youtube.com/@aleph-vc/
Follow Eden on X: https://x.com/eden
Follow Eden on LinkedIn: https://www.linkedin.com/in/edens/
Follow Aleph on X: https://x.com/aleph
Follow Aleph on LinkedIn: https://www.linkedin.com/company/aleph-vc/
Follow Aleph on Instagram: https://www.instagram.com/aleph.vc/
Follow Almost Human on X: https://x.com/almosthuman_pod
Follow Almost Human on Instagram: https://www.instagram.com/thealmosthumanpodcast
BuzzFeed, HuffPost, Tasty, Amazon, Etsy, Google, Meta, Apple, Chrome, AdWords, Gilad Lotan, AdWords, Slack
Executive Producer: Dalit Merenfeld
Producer: Dalit Merenfeld
Video and Editing: Ron Baranov
Music and Creative Direction: Uri Ar
Content and Editorial: Dalit Merenfeld and Kira Goldring
Design: Uri Ar