Liran Tam Takes on Meta Learning


You can’t go ten minutes without someone talking about AI, but most of the time, it’s hype without substance. Almost Human exists to change that: to dig past the noise and reach the sharp ideas, technical breakthroughs, and human stories that actually shape the future.
Let’s cut through the noise.
00:00 Intro - Meta learning in one line
01:07 Who is Liran? Guitars, ML physics, robotics, JoyTunes
01:36 This show is for founders, not academics
02:06 Why humans adapt fast (and models don’t)
03:21 Meta learning in plain English (few-shot, quick adaptation)
04:02 The “42 scans vs 2M” oncology example
04:38 How to attach few shots to a broader adjacent dataset
06:40 Use case #2: Less compute, per-user personalization (recommendations)
08:35 Business lens: how close does the teacher dataset need to be?
09:17 Domain pitfalls and cross-domain wins (video → robotics)
11:34 Medical imaging transfer across different organs
12:12 Domain-base models & how far transfer can go
12:52 Why researchers love meta learning for generalization & reasoning
15:12 Speed to adoption; cyber as a meta learning playground
18:09 Moats shift: from “most data” to “most diverse domains”
20:08 Founder checklist: how to validate meta learning fast
22:06 Pitfalls: meta-parameters, inner/outer loops, debugging
24:04 Quickfire: Liran’s AGI definition & timeline
25:25 US vs China on AGI progress
26:08 Eden’s close + CTA
Meta learning flips the script: you don’t win by hoarding the most data - you win by adapting the fastest. In this episode, Eden sits down with engineer and researcher Liran Tam to demystify meta learning for founders. We cover how to get real performance from tiny datasets, when to use adjacent tasks to supercharge learning, and why personalization and low-compute inference are perfect use cases. We also dig into transfer across domains (from video to robotics), opportunities in cybersecurity, and how the moat is shifting from “most data” to “most diverse domains.”
If you’re an early-stage builder asking “Do I need Google-scale data to compete?” this one’s for you.
Please rate this episode 5 stars wherever you stream your podcasts!
Eden Shochat
Welcome to today’s episode of Almost Human. Today we’ll be talking about meta learning: the way to teach AI how to learn. Instead of needing millions and millions of CT images, or the cyber history of an entire domain, you can actually teach the AI model, from an adjacent distribution, how to best use a limited amount of data. It’s an equalizer for early-stage startups that don’t need a hundred million dollars to train a new foundation model.
Eden Shochat
Today, I’m joined by Liran Tam, who I’ve actually known since before AI was really considered AI, right? It was… 14 years back?
Eden Shochat
Um, you rebuilt the physics of guitars. Almost 20 years - that is incredible.
Eden Shochat
So you built guitars, right? Or rather, the physics engine behind guitars, with machine learning. And then at Simply - at the time it was called JoyTunes - you built the recognition engine. You've done robotics, you've done almost anything and everything when it comes to new datasets and new machine learning approaches.
Eden Shochat
We're not targeting AI academics; the people we're targeting are founders and entrepreneurs. The goal is for them to walk away understanding what meta learning is, not panicked about it. So we'll just focus on how it changes the game for early-stage founders.
Liran Tam
Right.
Eden Shochat
Awesome. So if you had to summarize what meta learning is at 20,000 feet - what is it?
Liran Tam
So maybe before that: we as humans have this incredible ability to generalize and to adapt to new tasks very quickly and very efficiently, right? From a really early stage of development we can pick up new skills with just a few examples. We can pick up patterns and generalize. And this is really an inherent, fundamental trait of intelligent behavior.
Liran Tam
Right. So we don't usually see that behavior in models - even large foundation models don't adapt efficiently to new tasks. Take ChatGPT, for example. You can teach it a new trick at inference time - that's in-context learning - or you can fine-tune it to some out-of-domain or similar distribution, but generally it won’t generalize as well as humans, and especially not with just a few examples.
Liran Tam
That's the general case.
Eden Shochat
Yeah.
Liran Tam
So, meta learning to the rescue. Meta learning is about training models to learn how to adapt quickly to new tasks. That's the premise - often with very little data, or, more precisely, very efficiently: the adaptation phase, the tuning phase, might take only a few gradient steps of backpropagation, or just a handful of examples.
Liran Tam
So that's, I think, the broad brushstrokes. That's meta learning.
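To make that inner/outer loop concrete, here is a minimal MAML-style sketch in PyTorch: the model is trained so that a few gradient steps on a new task's handful of examples already yield good performance. The sine-fitting toy task, network size, and hyperparameters are invented for illustration and are not from the episode.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.func import functional_call

def sample_task():
    """A toy task family: regress y = a*sin(x + b) with task-specific a, b."""
    a = float(torch.rand(1)) * 4 + 1
    b = float(torch.rand(1)) * 3
    def draw(n):
        x = torch.rand(n, 1) * 10 - 5
        return x, a * torch.sin(x + b)
    return draw

model = nn.Sequential(nn.Linear(1, 64), nn.ReLU(), nn.Linear(64, 1))
meta_opt = torch.optim.Adam(model.parameters(), lr=1e-3)   # outer-loop optimizer
inner_lr, inner_steps = 0.01, 3                            # the "few gradient steps"

for meta_iter in range(2000):                 # outer loop: learn how to adapt
    meta_opt.zero_grad()
    meta_loss = torch.zeros(())
    for _ in range(4):                        # a small batch of tasks per meta-step
        draw = sample_task()
        x_s, y_s = draw(5)                    # support set: the handful of examples
        x_q, y_q = draw(20)                   # query set: post-adaptation performance

        adapted = dict(model.named_parameters())
        for _ in range(inner_steps):          # inner loop: a few adaptation steps
            loss = F.mse_loss(functional_call(model, adapted, (x_s,)), y_s)
            grads = torch.autograd.grad(loss, list(adapted.values()), create_graph=True)
            adapted = {name: p - inner_lr * g
                       for (name, p), g in zip(adapted.items(), grads)}

        meta_loss = meta_loss + F.mse_loss(functional_call(model, adapted, (x_q,)), y_q)
    meta_loss.backward()                      # update the shared initialization
    meta_opt.step()
```

Note that the outer loop never optimizes for any single task; it optimizes how well the shared initialization performs after the few inner-loop steps.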
Eden Shochat
So one example that many people use is oncology, right? If you look at fine-tuning, or even pre-training, a lot of it is just memorization and then building abstractions. But if I, as a startup, have 42 images and Google has 2 million, what chance do I have?
Eden Shochat
So how is this a good example of meta learning, and if so, how does it work? How do I teach it without needing that 2-million-image dataset?
Liran Tam
Well, first of all, there's no free lunch, right? If you have large labeled training data for your downstream task, then probably nothing beats the supervised or semi-supervised learning setting. But let's say you have 42 labeled examples - X-ray images of some rare, new medical condition.
Liran Tam
Right. That's the setting: no matter how hard you work, that's all the training data you can curate. With meta learning, in order to do something with that - to train a model with that - you need to attach it to a larger dataset that has a similar distribution, or draws from a similar domain, and that is easier for you to get access to.
Liran Tam
Right. So let's say medical condition A is rare and unique, and only 42 labeled X-ray images are available. Then what you can do is tap into a couple of hundred other, similar tasks - say, classification tasks for other medical conditions, not the rare one - and train the model not to excel at classifying medical condition A specifically, but to understand the distribution of the tasks themselves. Once you do that, you have a pre-trained meta-learner, and then you can tune it, with your few examples, to your downstream task. So that's one use case, and meta learning really shines in that use case.
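To make the data side of the "42 scans vs. 2 million" idea concrete, here is one way the meta-training episodes could be constructed: small classification tasks are sampled only from adjacent, better-documented conditions, and the 42 rare-condition scans are held back as the support set for adaptation. The dataset layout, episode sizes, and helper below are hypothetical placeholders, not a description of any specific system.

```python
import random
from collections import defaultdict

def build_episodes(adjacent_pool, n_way=5, k_shot=5, n_query=10, n_episodes=1000):
    """adjacent_pool: list of (image, condition_label) pairs from the
    better-documented adjacent conditions -- NOT the rare condition."""
    by_condition = defaultdict(list)
    for image, condition in adjacent_pool:
        by_condition[condition].append(image)

    # Only conditions with enough images can form an episode.
    usable = [c for c, imgs in by_condition.items() if len(imgs) >= k_shot + n_query]

    episodes = []
    for _ in range(n_episodes):
        chosen = random.sample(usable, n_way)        # one episode = one small task
        support, query = [], []
        for episode_label, condition in enumerate(chosen):
            imgs = random.sample(by_condition[condition], k_shot + n_query)
            support += [(img, episode_label) for img in imgs[:k_shot]]
            query += [(img, episode_label) for img in imgs[k_shot:]]
        episodes.append((support, query))
    return episodes

# The meta-learner is trained across these episodes to adapt quickly to any
# small classification task. At deployment, the 42 labeled rare-condition
# scans play the role of a single support set, and the adapted model is then
# evaluated on new patients.
```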
Liran Tam
Another use case is when you have a limited compute budget - either in the training phase, or especially at test time, at inference, after deployment. So let's say, for example, you want to build a recommendation system that is personalized per user.
Liran Tam
It could be for medical, it could be for streaming services, or like what LinkedIn did for recommending job opportunities - a paper they published, I think in 2024, I want to say. The best candidate for solving that problem would be meta learning, where you have just a few examples per user, across a large set of users.
Liran Tam
So each user is considered a different task. And instead of training a huge model that tries to pick up all the latent abstractions and representations of the data across all users, you can keep the model really tight and really lean in terms of compute capacity - it only needs enough capacity to do well on a specific user. That's between one and two orders of magnitude less than what you need when you train a large foundation model. And you train that model under the meta learning paradigm to do well not on one specific user, but to do well at the tuning phase. So that's the premise, that's the general idea, I think.
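A rough sketch of what that per-user adaptation could look like at serving time: the shared scorer stays small, and each user's handful of interactions drives a few gradient steps before ranking. The scorer architecture, feature dimensions, and data below are invented placeholders; a real system would plug in a meta-trained model and actual interaction logs.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.func import functional_call

ITEM_DIM = 32                                    # invented item-feature size
scorer = nn.Sequential(nn.Linear(ITEM_DIM, 64), nn.ReLU(), nn.Linear(64, 1))
# ...assume `scorer` has already been meta-trained across many users...

def personalize(model, seen_items, feedback, inner_lr=0.05, steps=3):
    """Adapt the shared scorer to one user from a few (item, liked) pairs."""
    adapted = {name: p.detach().clone().requires_grad_(True)
               for name, p in model.named_parameters()}
    for _ in range(steps):                       # a few inference-time gradient steps
        logits = functional_call(model, adapted, (seen_items,)).squeeze(-1)
        loss = F.binary_cross_entropy_with_logits(logits, feedback)
        grads = torch.autograd.grad(loss, list(adapted.values()))
        adapted = {name: p - inner_lr * g
                   for (name, p), g in zip(adapted.items(), grads)}
    return adapted

# A handful of interactions is enough to specialize the scorer for this user.
seen_items = torch.randn(8, ITEM_DIM)            # 8 items the user reacted to
feedback = torch.randint(0, 2, (8,)).float()     # liked / did not like
user_params = personalize(scorer, seen_items, feedback)

candidates = torch.randn(100, ITEM_DIM)          # items to rank for this user
with torch.no_grad():
    scores = functional_call(scorer, user_params, (candidates,)).squeeze(-1)
print(scores.topk(5).indices)                    # this user's top-5 recommendations
```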
Eden Shochat
So I love this. First, if I zoom out to the business side: either the market is too small for data to have been collected, or the market is too backwards to have collected information at all. And then the question becomes, if you look at that adjacent dataset you can actually train on, how close does it need to be - the teacher dataset to the student dataset?
Eden Shochat
What could actually work? Because irrelevant information won't help me make it easier to learn from a smaller dataset. So how close does it need to be?
Liran Tam
Well, that's one of the pitfalls of meta learning in general: how do you define the domain of the task? Because what you really want to do is train the model on the domain of the task you'll tune or adapt the model to before deployment. I can give you extreme examples of things that work.
Liran Tam
That's not a guarantee - it's not the general case. But there are some really successful works that show how you can meta-train on video sequences of people doing different tasks, and then at meta-test time, when you tune the model, you tune it to a robotic arm that does things in a real environment.
Liran Tam
So we're talking about two very different domains where the overarching behavior is the same: some mechanical thing - the human body or a robotic arm - is behaving in some way or another in an environment. So that's a success story, and I'm talking about a deployable model in industry.
Liran Tam
There are quite a few success stories of companies that train a model to do well at the meta-training phase on fictitious or not-too-similar environments - not even with robots - and then tune it every week or so to do a new task on the assembly line.
Liran Tam
So that's one extreme scenario. Another, more straightforward scenario is where it's intuitive that we're talking about the same domain. If we go back to our X-ray example, it's really intuitive: what's the domain? The domain is classification of medical conditions.
Liran Tam
The conditions can be very different - different diseases or whatever - but they should speak the same language, right? They draw from the same input data distribution: X-ray images. They could also draw from similar medical imagery, like CT scans, but it doesn't even have to.
Liran Tam
There is a set of papers proving it doesn't even have to be the same organ, the same part of the human body. And it works very well.
Eden Shochat
Is there an opportunity - or maybe there's already a database - of domain base models? Because effectively, what you're saying is: I can teach the model to know X-rays, I can teach the model to understand space and time for robotics, and I'll teach it via video. So is there an opportunity here, or are meta learning base models just too different from each other?
Eden Shochat
It goes back to the question of how close the base model, the pre-meta-learning, needs to be.
Liran Tam
So there's this renewed popularity of meta learning in the research community - utilizing meta learning as a bedrock, a real foundation, of abstract thinking, reasoning, generalization, and problem solving - because the meta learning paradigm enables you to really generalize meta-features. If we jump a few years into the future, I believe meta learning will play a significant role in something like a foundation model that excels at adapting to any kind of scenario, environment, or task.
Liran Tam
But we're not there yet. If you train a meta-learner on a specific modality - say audio, for speech recognition of highly impaired speech with a strong individual component, meaning speech that is very particular to a specific user by phonetic or semantic criteria - and you train that meta-learner to specialize in learning a specific user, it will not transfer to, say, robotic arms performing some behavior in an industrial setting. It will not transfer. Having said that, there is quite a lot of research work proving that within a really broad domain like robotics, or within a modality like images, video, or audio - whatever audio means: speech, music, sound production - transfer learning, especially in the setting of reinforcement learning, transfers meta-knowledge and meta-skills very well across subdomains.
Eden Shochat
That totally answers it. I think there's definitely an opportunity, at least in specific domains, to create a database that jump-starts things and effectively lets you validate that meta learning would be a good tool for a specific problem. But then maybe you'd start a model fresh after that - it's interesting.
Eden Shochat
But we'll talk about this in a second. I'd also raise the speed of adoption for customer use cases. Israel is obviously a cyber country, and with cyber attacks, every technique is different from another. It's the same domain, but when people use a zero-day exploit it's usually a very different area, or a different methodology of attack.
Eden Shochat
So is meta learning indeed interesting to adopt within that domain, where there are just different workflows of a specific attack? If I look at cyber.
Liran Tam
So I think - again, this is only my intuition, but based on papers that deal with that domain - there's a huge opportunity for meta learning in the cyber security space. Because even though the environment changes - say, the specific network topology of company A versus the large-scale topology of company B, with all these tiers and layers of networking - a large part of how people attack these networks or services has to do with how we as humans think and operate. So there are a lot of shared patterns that are central to, or Lego-ed together to create, a new virus or a new attack against that network. I think meta learning could really excel there, because one of the problems is that you learn something on a specific topology or network, right?
Liran Tam
You learn that specific attack, and then you have a hard time applying it to a large-scale network, or to a specific domain's network, or to a specific API-based service. Meta learning can generalize the underlying features or patterns that build that specific attack. There's a lot of work being done in academia on that.
Liran Tam
I don't know of specific startups that utilize meta learning in that domain. It's a shame, really.
Eden Shochat
Again, this is the goal of Almost Human, right? There's so much technology development that's applicable to business problems, but most of the mapping lives in researchers' minds rather than the entrepreneur's. All right, so if I go back to first principles: common wisdom today is that data and distribution are the moats in the AI era. Effectively, what you're saying is that meta learning pushes that much more towards distribution, because you need less data in order to tackle a specific AI problem.
Liran Tam
I think that's an excellent conclusion. Meta learning shifts the moat from who has the largest raw datasets to who has the most diverse datasets, right? You don't need to have a large number of examples from a specific domain.
Liran Tam
It's better to have a small number of examples that already explain or represent the domain, but across a large number of domains. I think the moat will be shifted, not eliminated - that's one thing to say about the moat. Another thing is that a lot of opportunities are overlooked because you don't have enough data to solve them.
Liran Tam
So suddenly, if you apply the meta learning paradigm, that gives you a potential solution to something that doesn't have enough data.
Eden Shochat
So I'm sold, fine. But I'm an entrepreneur. I want to figure out whether meta learning is the right approach to the problem I want to tackle. How do I validate that it's a relevant approach? What's the fastest way, if I'm not a machine learning person?
Liran Tam
If you can find tasks similar to your problem that act or behave the same at the functional layer - not at the feature layer, and not at the specific pixel level - and you have only a fraction of the data needed for the canonical, traditional supervised learning pipeline, then what you can do is experiment very fast with some open-source code.
Liran Tam
There are quite a few frameworks that implement even state-of-the-art meta learning algorithms, and you can experiment very fast - I'm talking days, not weeks or months - because it doesn't necessitate a radical shift from what you're used to doing.
Liran Tam
You just need to assemble training data and apply the model. It's a little bit more involved than that, but you can experiment very fast on your data and your problem with meta learning and see if it sticks, if it solves the problem.
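As one concrete example of that fast-experimentation path: the open-source learn2learn library wraps a model so that the inner loop becomes two calls, clone() and adapt(). The usage below follows the library's documented MAML wrapper as of recent versions (check the docs for yours); the toy task and tiny model are placeholders for your own episode sampler and architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import learn2learn as l2l   # pip install learn2learn

def sample_task(n=25):
    """Placeholder few-shot regression task; swap in your own episode sampler."""
    a, b = float(torch.rand(1)) * 4 + 1, float(torch.rand(1)) * 3
    x = torch.rand(n, 1) * 10 - 5
    return x, a * torch.sin(x + b)

model = nn.Sequential(nn.Linear(1, 64), nn.ReLU(), nn.Linear(64, 1))
maml = l2l.algorithms.MAML(model, lr=0.01)          # inner-loop learning rate
opt = torch.optim.Adam(maml.parameters(), lr=1e-3)  # outer-loop optimizer

for iteration in range(1000):
    opt.zero_grad()
    meta_loss = 0.0
    for _ in range(4):                               # meta-batch of tasks
        x, y = sample_task()
        x_s, y_s, x_q, y_q = x[:5], y[:5], x[5:], y[5:]   # support / query split
        learner = maml.clone()                       # task-specific copy
        for _ in range(3):                           # a few adaptation steps
            learner.adapt(F.mse_loss(learner(x_s), y_s))
        meta_loss = meta_loss + F.mse_loss(learner(x_q), y_q)
    meta_loss.backward()                             # meta-update of the initialization
    opt.step()
```

The point is less the specific framework than the shape of the experiment: swap in your own task sampler and model, and the whole validation loop fits in a few dozen lines.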
Eden Shochat
So, the flip side of that same question: what pitfalls should entrepreneurs expect when they do meta learning? What are the failure modes you see most commonly?
Liran Tam
Unfortunately, there are quite a few. Since meta learning works by optimizing your target at several timescale resolutions - there's this inner loop and outer loop - there are quite a few parameters, or meta-parameters, pun intended, that you need to set and tune in order to make the thing work.
Liran Tam
So it's more complicated, more involved, than just playing with the learning rate and maybe a couple of parameters in your optimizer. That's one pitfall. Another pitfall is that it's harder to evaluate or debug what is happening when stuff doesn't work - and with any deep learning mechanism, stuff doesn't just work from the get-go. You need to optimize your meta-parameters and do some experimentation. So with meta learning it's a more involved process - more, but not much more involved. It's feasible.
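For a sense of what those extra knobs look like in practice, here is an illustrative configuration listing the hyperparameters a typical meta-learning run adds on top of a standard training loop. The default values are placeholders, not recommendations.

```python
from dataclasses import dataclass

@dataclass
class MetaConfig:
    # Outer loop (meta-optimization)
    outer_lr: float = 1e-3       # learning rate of the meta-optimizer
    meta_batch_size: int = 4     # tasks per meta-update
    meta_iterations: int = 10_000
    # Inner loop (per-task adaptation)
    inner_lr: float = 0.01       # often more sensitive than outer_lr
    inner_steps: int = 5         # too many steps can overfit the support set
    first_order: bool = True     # drop second-order terms for speed/stability
    # Task construction
    n_way: int = 5               # classes per episode
    k_shot: int = 5              # support examples per class
    n_query: int = 15            # query examples per class

cfg = MetaConfig()
print(cfg)
```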
Eden Shochat
It's mostly about being able to quickly understand: is there value? It could be that there's still 20, 30% improvement to be had, but this is applicable, this is workable.
Liran Tam
Yeah, yeah. That's maybe the main pitfall.
Eden Shochat
Yeah. Well, a few quick questions - this is awesome. First: what's AGI for you? What would you define as AGI?
Liran Tam
Oh, wow. For me, AGI is when an agent - it doesn't have to be embodied in any product - is able to perform well on most of what the average person can do. I think the pinnacle would be if the average researcher, in the scientific domain, could be replaced by a trained agent leveraging AGI.
Eden Shochat
And when is that? By that definition, when do you think it happens, if you had to guess?
Liran Tam
April 2nd, Monday, 2:00. No, I'm kidding - within ten years. I'm not sure there will even be an AGI phase, right?
Eden Shochat
We'll just jump to superintelligence. But yeah, that's very short, and I think it's...
Liran Tam
Actually within our lifetime. Yeah.
Eden Shochat
Yeah. So if you're placing a bet: US or China?
Liran Tam
US. It's definitely the US. I think China in that regard is a bit of a paper tiger - they produce a lot of research, but I don't think it's as cutting-edge and established as what the West produces, or at least on par with it.
Eden Shochat
Yes. So this is the second episode so far - one vote for China, one for the US. We'll need a tiebreaker at some point.
Liran Tam
Awesome. I know you're a strong believer that China would be the one - or it might be, I don't know.
Eden Shochat
Yeah, there's an energy problem. China is just much, much better than the US at pushing energy through. So even at the most basic level, I think the US is screwed. But anyways, this is awesome.
Eden Shochat
What I love about meta learning is that it levels the field.
It’s not about who has the most data anymore - it’s about who adapts fastest.
If you’re building an early-stage company, that’s your wedge. You don’t need to beat Google at scale, you just need to learn faster in the places they overlook.
That’s the kind of thinking we’ll keep exploring here on Almost Human: where founders, not giants, redefine what’s possible.
If you have any questions, comments, or recommendations for future guests, feel free to email us at almosthuman@aleph.vc - and who knows, we may use your feedback, send you merch, or give you a shout out on the show!
If this episode cut through some of the noise for you, share it with a friend or colleague who’s building in AI. And make sure to follow Almost Human so you don’t miss what’s coming next.
I’m Eden Shochat. Thanks for listening. We’ll see you next time.
Simply, JoyTunes, Google, ChatGPT, LinkedIn, Meta Learning, Liran Tam
Follow Liran Tam on LinkedIn: https://www.linkedin.com/in/liran-tam-12a2bb/
Subscribe to Almost Human here: https://www.aleph.vc/almost-human
Learn more about Aleph: aleph.vc
Sign up for Aleph’s monthly email newsletter: https://newsletter.aleph.vc/
Subscribe to our YouTube channel: https://www.youtube.com/@aleph-vc/
Follow Eden on X: https://x.com/eden
Follow Eden on LinkedIn: https://www.linkedin.com/in/edens/
Follow Aleph on X: https://x.com/aleph
Follow Almost Human on X: https://x.com/almosthuman_pod
Follow Aleph on LinkedIn: https://www.linkedin.com/company/aleph-vc/
Follow Aleph on Instagram: https://www.instagram.com/aleph.vc/
Executive Producer: Erica Marom Chernofsky, Uri Ar
Producer: Dalit Merenfeld
Video and Editing: Dalit Merenfeld
Music and Creative Direction: Uri Ar
Content and Editorial: Dalit Merenfeld and Kira Goldring
Design: Uri Ar