David Magerman


How can values create value? On this podcast, Michael Eisenberg talks with business leaders and venture capitalists to explore the values and purpose behind their businesses, the impact technology can have on humanity, and the humanity behind digitization.
00:00 - Intro
02:55 - Why Wall Street’s Smartest Fund Rejected Finance People
06:04 - If You Can’t Close, You’re Useless
08:42 - The Hedge Fund That Invented Google Culture
12:58 - We Got Rich Solving Problems That Didn’t Matter
15:55 - How Jim Simons Proved Economists Were Wrong
17:23 - Success Was the Only Thing That Made Me Happy
18:28 - Why My Team Was Afraid of Me
21:56 - The Day I Realized No One Actually Cared
25:42 - Why Renaissance Was a Moral Failure
27:51 - I’m Glad I Made All the Wrong Choices
39:08 - AI Has Always Been a Scam
41:53 - The Truth About AI Demos No One Admits
44:55 - We Gave a Dangerous Technology to Children
45:33 - This AI Bubble Makes the Dot-Com Crash Look Small
52:38 - AI Is Killing the Next Generation of Engineers
55:47 - We’re Raising a Workforce That Can’t Think
56:42 - Most AI Use Cases Are Stupid
1:03:39 - How Tech Makes Money by Hurting People
1:08:34 - Why “Startup Nation” Is the Wrong Story
On this episode of Invested, Michael sits down with David Magerman, a co-founder and Managing Partner at Differential Ventures, a New York-based venture capital fund investing in deep technology companies that help power the data-driven economy. Previously, he spent most of his career at Renaissance Technologies, widely recognized as the world's most successful quantitative hedge fund management company. He helped found the equities trading group at Renaissance, joining the group in its earliest days and playing a lead role in designing and building the trading, simulation, and estimation software. David holds a PhD in Computer Science from Stanford University, where his thesis on Natural Language Parsing as Statistical Pattern Recognition was an early and successful attempt to use large-scale data to produce fully-automated syntactic analysis of text.
David is also the founder and president of Tzemach David, an education-focused foundation working to make education in Israel more accessible to English-speaking immigrants. Outside of his foundation's work, he is a significant contributor to the OU-JLIC in Israel, Koren Publications, Lori Palatnik's Momentum Unlimited, The City of David, and Rabbi Shlomo Katz's Shirat David, as well as some other more under-the-radar projects supporting the development and strength of the Land of Israel.
Please rate this episode 5 stars wherever you stream your podcasts!
David Magerman (00:00)
AI is an automatic machine gun.
Michael Eisenberg:
Right.
David Magerman:
And we're giving it to children.
Michael Eisenberg (00:04)
That’s a pretty negative view.
David Magerman:
I learned about the history of AI. It is littered with over-promising and under-delivering.
Back then it was nerdy. You could get beat up for being a computer scientist. So I eventually switched to computer science, and a professor–
Michael Eisenberg:
Did you get beat up?
David Magerman:
I hid out in the computer lab.
You are a bear. You are a monster. Like, we were terrified of you. Apparently I threw a monitor across the room once when we had some problem that was my fault.
They said, “We are not here to help America. We're here to make money.” And basically we were just an investment club using our brains to make money for a small group of rich white people.
When we started working there, like, a great day was making $20 million in a day. I was so unhappy. And the only thing that made me happy was success.
Michael Eisenberg (00:43)
Welcome back to another episode of Invested. I'm thrilled to be here with David Magerman. Welcome David.
David Magerman (00:52)
Thank you, good to see you.
Michael Eisenberg (00:54)
So for those who don't know, David was a partner at Renaissance Capital.
David Magerman:
Renaissance Technologies.
Michael Eisenberg:
Renaissance Technologies, sorry, for many, many years. Tell us the story, first of yourself and then Renaissance.
David Magerman (00:58)
Myself, I mean, I grew up in South Miami, kind of the poor part of town. My father was a taxi driver. My mother was a secretary. They didn't go to college. And I was raised on this tradition, on this religion of achievement, like academic achievement. Be successful. That's the way to get ahead. And I did pretty well. Went to Penn, University of Pennsylvania, for college, Stanford for my PhD. Studied computer science.
I was building language models in the 1980s and early 90s before people knew what they were.
Michael Eisenberg:
We're going to talk more about that.
David Magerman:
Sure. And then we realized that we were too early. I was working at IBM, in a group there, the speech recognition group, that was focused on building language models for machine translation and speech. And we were just too early. It took like weeks and weeks on hundreds of computers to train our models. So a bunch of us realized that there were better things to do with our time. And I followed some of my colleagues to Renaissance Technologies, a quantitative hedge fund run by Jim Simons, the famous mathematician, and really got them going with equities trading.
They were a commodities and currency shop that was capped out at like a half a billion dollars, and they didn't know how to trade equities. And my bosses, Peter Brown and Bob Mercer, helped them take the system they had, which wasn't working, and with my programming help and my system design help, made it into something that did work. And we went from managing half a billion to managing 10 billion within 10 years. By the time I was running equities trading, it was making three quarters of the company's money.
Michael Eisenberg (02:48)
Why did anyone think that what you did at the time was relevant to equities trading? We need to explain that to people.
David Magerman (02:55)
Sure. So Jim Simons had this idea that math was what mattered. And math means using data, and using formulas, and building predictive models that were mathematically correct. And it didn't matter what you were modeling. So we had particle physicists. We had astrophysicists. We had biologists, we had applied mathematicians. We eventually had statisticians, although initially we didn't have those.
And these brilliant people, smarter than me, far smarter than me, were brilliant at each of their fields, and they each brought a unique perspective to modeling something. And if you studied finance, if you studied financial markets or economics, you were going to be in the groupthink of everyone who, for the last few hundred years, studied capitalism. Jim didn't want that. He wanted unique perspectives, ways of doing things that no one else would have thought of, so you could find new things that no one else was looking at, so you could carve out signals in the markets that could make us money for a long time.
And when we got there, Peter, Bob and I, what they didn't have was computer scientists. So they were writing code in COBOL, and they were writing code in C–broken C code–and they really didn't have an idea how to build a big system. So they could trade a few commodities, they could trade a few currencies. But we're talking about trading thousands of stocks. The computer science complexity of that, the architecture you need for that, is very different. And so they were struggling to make that work. And I understood language, which was irrelevant at the time. And I understood some information theory, which they taught me a lot more of. But really what I understood how to do was to build robust software that could run programs for weeks on end without crashing. And eventually we could use that as the basis for our trading systems.
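What does "robust software that runs for weeks" look like in miniature? A minimal sketch of the supervisor-loop idea, assuming a hypothetical long-running job; the restart budget and backoff numbers here are invented for illustration, not Renaissance's actual design:

```python
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("supervisor")

def run_job():
    # Placeholder for a long-running task (e.g., a nightly estimation run).
    # In a real system this would do the actual work.
    pass

def supervise(job, max_restarts=100, backoff_seconds=60):
    # Keep a job alive for weeks: log every crash, back off, restart.
    # Real systems layer checkpointing and alerting on top of this.
    restarts = 0
    while True:
        try:
            job()
            log.info("job finished cleanly")
            return
        except Exception:
            restarts += 1
            log.exception("job crashed (restart %d of %d)", restarts, max_restarts)
            if restarts > max_restarts:
                raise RuntimeError("job exceeded its restart budget")
            time.sleep(backoff_seconds)

if __name__ == "__main__":
    supervise(run_job)
```

The design choice is the whole point: the system assumes the job will crash and plans for it, rather than assuming it won't.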
Michael Eisenberg (04:55)
Was there nothing in your understanding of language that mattered to this?
David Magerman (04:59)
Never. And I should be careful what I say, because I have to be cognizant of my restrictions about revealing secrets. But when they hired Peter Brown and Bob Mercer and me, and a couple of other people from the IBM speech group–probably about 10 or 12 of us came in the fullness of time–people assumed we were doing natural language understanding, that we were doing speech recognition, listening to recordings of the CEO speaking.
It really was just our ability to do science. We needed PhDs, not because they had the degree, but because if you finished a PhD, you were a closer. You could accomplish something. If you were a brilliant person but you were ABD–all but dissertation, meaning you completed all the requirements of your PhD except finishing your thesis–you were the opposite of what we wanted. You weren't a closer. You could get a project 98% of the way there, but you couldn't finish it. We needed people who could understand deep science, deep mathematics, and solve real problems in those fields, but could also finish a project, and write about it and defend it.
Michael Eisenberg (06:04)
That’s super interesting. What motivated you? So here you are in the IBM lab, working on NLP, what we now call ESNLP. And you get a call to go to what we would now call a quant trading firm. Why would you do that?
David Magerman (06:23)
I didn't. I said no.
Michael Eisenberg:
You said no.
David Magerman:
The first time they interviewed me, I had just started a job in a research lab in Cambridge, Massachusetts. And I was excited about building a big research group and conquering natural language. And I was doing well. And Peter called me up and said, “Hey, come out here and interview.” So I drove through Connecticut, took the ferry, and went to Long Island and interviewed. And they offered me a job.
And I said, “I would love to do it. It sounds really interesting. But I just moved out to Cambridge. They paid $7,000 to move me.” I felt like that was a fortune, such an investment in me that I couldn't renege on the promise to do the work. And so I said, “I would love to do it, but I really want to stay with my commitment to the company I'm at.” And Peter said to me at the time, “You're going to realize this is the dumbest thing you ever did.”
And then six months later, when the two really great scientists we tried to hire went to better labs and didn't want to come work with us, I realized that I–you know, me with my fresh PhD–was not going to be able to put together a world-class lab in my little company in the corner of Cambridge. And I called them back up and I said, “Is the job still available?” And they played with me. They were going to hire me no matter what. But they said, “I don't know. We've got other people interviewing. We have to talk to somebody. Why don't you come in again? And then we'll decide at the end of the week what we think of you.”
And they were just toying with me. They wanted to hire me, and they offered me a job. And it actually was a blessing, because the first job they were offering me was going to be in California, working in the production group. The production was done by a different manager in California. And in between the time that I said no and the time I said yes, they had moved the production group from California to New York, and the guy who ran it had quit. So they allowed me to come to Long Island, where I was still doing production, but I got to be in the room with all the scientists, all the researchers. And so even though my job was to write software for production, I had my fingers in everything. I was learning the whole system, finding out how everything worked, and really trying to be a nuisance in everyone's business, trying to learn how the Renaissance models worked.
Michael Eisenberg (08:42)
Did you realize what you were getting into? Like, that this was going to become what ended up being known as Renaissance, probably the most successful hedge fund of all time? Or were you just kind of going to do–
David Magerman (08:51)
My whole career, I've just kind of floated. When I went to Penn, I was just studying computer science. Actually, I started out as a political science major in the college, and then switched to religious studies, even though everyone knew that I was a computer scientist. Back then, it was nerdy. It was like, you could get beat up for being a computer scientist. So I eventually switched to computer science.
Michael Eisenberg (09:14)
Did you get beat up?
David Magerman (09:20)
In middle school, a bit. I hid out in the computer lab. We had TRS-80s with 4 kilobytes of RAM, and I wrote little games on the TRS-80 back in middle school.
Michael Eisenberg (09:28)
I don’t think you're that much older than me, but there were already, like, Apple IIs in mine, and I had a Commodore 64, a Jera.
David Magerman (09:34)
So I'm 57.
Michael Eisenberg:
Three years before me, okay.
David Magerman:
So it's like, in the late 70s, I went to a gifted elementary school program where they did things like teach you bridge as a class. And they actually had a stock market class, which I cheated on to do well–in the sense that, if you pick the stocks at the end of the program and just backfill the data, you do really well. Don't tell my teachers, they're gonna flunk me out.
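For the curious, the "cheat" he's describing is what quants call lookahead bias: picking winners with the returns already in hand. A toy sketch, with invented tickers and numbers:

```python
# Toy illustration of lookahead bias: "picking" stocks after the
# returns are known guarantees a great-looking record.
# All tickers and returns are invented for the example.
returns = {"AAA": 0.02, "BBB": -0.13, "CCC": 0.31, "DDD": 0.08}

# Honest game: commit to a pick before the returns exist.
honest_pick = "AAA"

# The backfill cheat: pick whichever stock did best, after the fact.
cheat_pick = max(returns, key=returns.get)

print(f"honest pick {honest_pick}: {returns[honest_pick]:+.0%}")   # +2%
print(f"hindsight pick {cheat_pick}: {returns[cheat_pick]:+.0%}")  # +31%
```

Any backtest that accidentally lets future data leak into the pick inherits the same inflated record.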
Michael Eisenberg (10:00)
That's the opposite of front-running the market.
David Magerman (10:03)
Exactly. But one of the things they offered was computer science. You could learn to program. And so my mother was taking, you know, whatever, 10-year-old me to the local community college to sit in front of a terminal and write BASIC programs as part of my curriculum in elementary school. And so I learned to program. They eventually scrimped and saved and bought me a computer, where, back then, the disk drive was a cassette tape. And you recorded the programs on the cassette tape in audio. Kinda like War Games. They have, like, the signals in audio and you have like–
Michael Eisenberg (10:39)
For the young people listening, War Games was a movie.
David Magerman (10:42)
Oh, sorry, yeah. Yeah, and Ally Sheedy. Early crush.
Michael Eisenberg (10:44)
It’s actually a great movie, with Matthew Broderick, I think. And it's actually a worthwhile movie even to watch today.
David Magerman (10:53)
For sure. I've probably watched it in the last two years. But, so when you recorded your programs, you had to remember to print them out also, because if you recorded over things over and over again, eventually the tape went bad, and you had to type it all in again. So it limited the progress. But I'm 10, 11 years old, so I'm not really ambitious about what I'm writing. Anyway, that was my introduction to computer programming.
But anyway, getting back to Renaissance–what was…?
Michael Eisenberg:
What I was asking was, did you really understand what you were getting into?
David Magerman (11:52)
Oh, right. Right. Yeah. So, no. I mean, I was telling a story–so when I was at Penn, I was just in a computer science class, and the professor saw something in me and said, “Do you want to do research as an undergraduate?” Like, you know, he befriended me and he said, “Pick a lab and I'll make an introduction.” He probably wanted me to pick his lab.
But I didn't. I picked Mitch Marcus's lab doing natural language. And then I wrote up my undergraduate thesis and got it published. And when I gave the talk, someone at IBM Research saw it and said, “I can't have someone outside of IBM doing this kind of work. We own the market on this work.” And so he offered me a summer job. And that turned into my thesis research position. And then I ended up at Renaissance, and I was just going to work for a few years, make a few million dollars, go back to academia and hopefully become a professor, a successful professor.
I was trying to get a job at Johns Hopkins, maybe back at Penn. And I was there just having fun. I worked in the lab, I slept in the lab, I had a sleeping bag next to my computer. And I just wanted to work hard, learn, do really well, and then go back to my academic research. But you just, it's, you never leave. You know, it's such a-
Michael Eisenberg:
Why?
David Magerman:
Because it's like working in a grad school with a really, really good snack bar.
Michael Eisenberg (12:58)
Sounds like Google.
David Magerman (13:18)
It's a lot like what Google–it was Google before Google. You know, the kitchen grew and grew. They eventually built an enclosed sports facility so people wouldn't leave campus to play sports. Everyone lived in really nice houses within a mile or two of the office. They really wanted everyone just to stay and work.
And I loved it. It was a really collegial environment. We fought. We argued. We discussed. We shared ideas. And it was like–we were solving completely worthless problems. I mean, we were doing absolutely nothing for humanity, but it was hard. It was competitive. We knew the world was trying to beat us, and we were beating them, and we just wanted to stay ahead. So it's a fun game to play every day. Try to outscore everybody, where the score was millions of dollars. And I was with, like I said, people much smarter than me. I learned so much from the people around me, and they thought I was smarter than them, because I could write software better than them. They could think of ideas I could never think of, but I could build the things they thought of and they couldn't.
Michael Eisenberg (14:00)
Sounds like, when you say you thought you were there for a few years to make a few million dollars, you knew what you were getting into there–that this was a juggernaut of a fund.
David Magerman (14:08)
No, no. I mean, what I thought I could make in three or four years, I could make in two months, as things got underway. No, none of us understood this. When we started working there, a good day–like, a great day–was making twenty million dollars in a day. We had this policy where if you made twenty million dollars in a day, they'd buy you lunch the next day. And we had champagne at four o'clock.
And if you lost 20 million dollars in a day, they'd run a random number generator, and one of the employees would have to buy everyone lunch. That was the game we played. But 20 million dollars was a lot of money to make. By the end of my time there, in 2005, 2006, 100 million dollars, 150 million dollars was a good day. So the scale of it–we never thought that the capacity of the markets was that big. Equities had so much more capacity than currencies and commodities, and they were also at the time less aggressively traded.
Michael Eisenberg (15:14)
Right. I think it seemed less mathematical at the time, right? More fundamental research. Commodities and currencies tend to have more of a, let's call it, mathematical bent than...
David Magerman (15:24)
And there was this whole efficient market hypothesis that said that you couldn't–people told Jim, “What you're trying to do is impossible. You're guaranteed to fail. Like mathematically, you'll fail.” And we obviously proved them wrong.
Michael Eisenberg (15:37)
And what's your takeaway from that? It must be like a deep takeaway from the fact that here comes Jim Simons, and he says, “I'm gonna use math for this,” and it conflicts with the efficient market hypothesis, which I think has been proven wrong in many ways. But that's daunting on some level.
David Magerman (15:55)
Yeah, I mean, what it proves is that–and Renaissance was very much a startup. When I got there, even though it was managing half a billion dollars, it was 20-something people. And we were in the early days of a field. What it showed me is that you have to have conviction. You have to know what your edge is. And then you just have to have maniacal conviction. And when something goes wrong, if it doesn't break your confidence in your thesis, you just have to figure out what's wrong and fix it.
And there were a number of times where we had losing streaks, and things weren't working right. And Jim just said, “Is the math right? If the math is right, it's bad luck. But if it's not bad luck, the math isn't right.” And more often than not, the math wasn't right. We had either a bug or some assumption we were making mathematically that was crude and inaccurate. And sometimes we had bad luck.
The NASDAQ crash was pretty tough on us. You know, certain big market moves were, although the recovery was amazing for us. On the big moves, we lose a lot on the move, and we make a lot on the chaos of the recovery.
Michael Eisenberg (17:02)
Did you have the conviction that Jim had?
David Magerman:
More naively, because he knew like a million times more math than I did. But I believed that whatever I did was going to work eventually. And if something was going wrong, it was my fault and I had to fix it.
Michael Eisenberg:
Where’d that come from?
David Magerman (17:23)
I mean, my childhood. Growing up lower middle class, growing up–they didn't talk about it then, but I was probably on the spectrum, in that I had no social skills. I didn't understand people, and I was a real loner, and I just needed–I was so unhappy. And the only thing that made me happy was success, you know, accolades. Being given a math prize or a science prize, or winning a Latin competition. And yes, I studied Latin when I was in middle school. It's actually a great place to meet girls. Not a lot of guys go to these things.
But I learned that, to be happy, I had to be successful. And so when it wasn't working, I had to figure out how to fix it. And you get a lot of pats on the back for fixing things that seem like they shouldn't work. So when you solve a problem that other people think can't be solved, you know, they give you a good pat on the back.
That pat on the back would get me two weeks.
Michael Eisenberg (18:21)
Two weeks.
David Magerman:
Two weeks of happiness, and then gotta go back and make another hit.
Michael Eisenberg:
Must have been difficult to be around, if you're only happy for two weeks.
David Magerman (18:28)
Oh, no one liked me. No, I actually–I thought I was loved. The secretaries loved me because I was polite. But when I left Renaissance in 2006, my team, you know, the group that I'd managed–they had a party, a dinner, and that was when they gave me gag prizes. And I won't repeat all the things that they had on them, but basically the theme was: you are a bear. You are a monster. We were terrified of you.
Apparently–I didn't remember doing this, I'm sure I did–apparently I threw a monitor across the room once, when we had some problem that was my fault. Like, I wasn't mad at anybody else, but if my group did something wrong, it was my fault. And I once threw a monitor across the room in rage over what had happened. I thought I was a little teddy bear, I didn't know. I was little old me, I was the shortest guy in the company probably, and I didn't think I was that scary, but I was terrifying, apparently.
Michael Eisenberg (19:22)
Is that a lack of self-awareness?
David Magerman (19:24)
Yeah. So I did something that I think every manager should have the opportunity to do. I quit Renaissance in 2006. In 2008, I moved to Philadelphia. And then when my noncompete ran out, I was bored, because I needed to work. And I decided I could get my interview suit on and go interview at different firms, because I was free to work anywhere, or I could put my jeans on and go back to Renaissance and say, “Hey, guys, would you mind if I worked from Philly full time and just wrote code?” You know, find signals, do research. And they said, “Great.” And they paid me what I wanted to be paid.
I said, “The only rule is, I want to make this amount of money, and I never have to come into the office.” And they said, “We'll pay you this amount of money. The only rule is, you can never come into the office.” Because as much as I didn't like them, they didn't like me. And it was great. I worked remotely. I didn't answer the phone. I responded to emails. There were a few people I spoke to, and I was pretty productive.
But one thing that happened was, because I came in as a researcher, as effectively a junior researcher–I was now working lower in the management chain than people who used to report directly to me. And the vindictiveness of the people trying to undermine my ability to get anything done–any chance they had to stop me from being productive, to reject my work, to reject the code I wanted to put in–it proved to me how nasty I must have been to have created that much enmity, that much anger. And it taught me a lesson that, even though we accomplished so much, it wasn't worth it.
I could have accomplished 97% of what I accomplished and been a much nicer guy. And seeing how being a harsh taskmaster leads to such long-term bitterness–and this was like 10 years later, people were still giving me a hard time, and nothing I could do would really change their bitterness toward me–it really hit me at the very end, when I eventually got fired from Renaissance for saying some things that I can't talk about now. I was under attack from every angle. I was at war.
And I was in a horrible mental space. And the people that I spent my entire adult life working with, 25 years, including IBM–
Michael Eisenberg (21:56)
Those people stayed there a long time also.
David Magerman (22:12)
Yeah, everyone stayed. These were the only friends I had. And it turned out I had no friends. Not a single person called me up to say, “We can't talk shop, but how are you doing? How's your family? How are you handling it?” Not a single person.
And that taught me something that I'd started to learn, because I'd started becoming religious. I'd started studying Torah. At first I was mad at them, that they had wronged me by not being there for me. And then I realized, no, I didn't do what I needed to do as a person to make friends. I was a colleague. I created symbiotic relationships with my coworkers, and they loved that. But the moment I wasn't relevant to them financially, professionally, I didn't exist as a person. And that was on me. And I realized that if I want to have relationships, it's a two-way street. I've got to care about their families. I've got to care about their well-being. I've got to push for the success of people who work for me, even if that means they're not working for me anymore, if they graduate to a better position. And I needed to really reevaluate how I looked at the humanity of the business world that I was in.
Michael Eisenberg (23:07)
Was there a specific moment that that kind of dawned on you?
David Magerman (23:09)
I mean, it was when I was in the trenches of this fight, a fight for my financial life, against my former employer. I was threatening them, they were threatening me. They were trying to claw back every dollar they'd ever paid me, because I forgot to fill out some compliance form in 2002. And I called up one of my longtime friends just to check in, just, “How are your kids doing?” And he wouldn't take my call.
And this is someone that, I mean, I'd known before he knew his wife. He certainly, you know, saw my kids grow up. And it was just really very telling that the people that I thought were in my life were just ornaments. And I was just an ornament to them.
Michael Eisenberg (23:58)
You think it's like that across funds in Wall Street, or is it unique to you or to Renaissance, or…?
David Magerman (24:05)
I think it's like anything–it's the human condition. There are people who are driven. Raising my kids, I definitely let them know the feelings I have about this, so they can choose for themselves how much they want to maniacally succeed and achieve, and how much they want to have a community and a life. Because it's not like I say I deeply regret everything I did. I just wish I'd known the consequences of it, which was having no relationships as a 40-something-year-old guy, and having to start then and build a community. But I just think that it's something that is just a part of humanity. And this is really where studying Torah, especially studying Hasidus and Tanya, helps me to understand that Jews are more than just apes with language. Like, you know, I mean, we have a godly soul. We have a purpose in the world, and we have a purpose for each other–that we collectively are a soul. We all have a piece of God, and your piece of God and my piece of God are roughly the same, and I have to care about you as much as I care about myself. And that's not just words. It's not just showing up for a minute. That's in all aspects of it. And I think realizing the connectedness of the Jewish people has helped me understand that I have to look at the human beings in my life and view them as not just my brother or my friend, but as myself.
Michael Eisenberg (25:33)
That's amazing actually. You said before that you thought, at some point, that what you had done for all these years was worthless. What did you mean by that?
David Magerman (25:42)
When 9/11 happened, it was a huge event. There were people on our trading desk who had phone lines directly to people who weren't there the next day. And it was really very emotional. As it was going on, I thought, you know, we have the most brilliant minds in our company. We had, probably at that point, like 30 or 40 PhDs who had deep knowledge of cybersecurity, and coding theory, and information theory. And I said, “Why don't we take some of our team, take a break, and offer our services to the NSA, to the CIA, to help build better systems for detecting these kinds of attacks being communicated?” And they laughed me out of the room.
They said, “We are not here to help America. If you want to do that in your spare time, go to it. We're here to make money. We're here to make money for investors.” And eventually, we were just there to make money for ourselves, because we kicked our investors out of the money-making parts of the company. We still offered the parts that didn't make money to other people. But the parts that made money we kept for ourselves. And basically we were just an investment club of people using our brains to make money for a small group of rich white people. And given the capacity that we had for solving problems, in the end it was a huge waste of human capital. The teachers that collectively taught all of us how we did what we did would be rolling over in their graves to know that all of their effort to teach us led to just money.
Michael Eisenberg (27:21)
If you had it back, what would you do with those years?
David Magerman (27:22)
So people ask me this question, I think that-
Michael Eisenberg (27:25)
What, I’m not the first to ask you this question? I take it back.
David Magerman (27:51)
Actually, I speak to student groups, I speak to young adults, professionals, and I can't say I would change anything. Because if I didn't make the mistakes I made, I wouldn't know what I know today. And I feel so much more empowered to do things today because of all the mistakes I've made. And I think what's going on now, especially today, this year, these years, is so important, and so critical that I'm glad I made the mistakes then, when less was at stake, and now I can use the collective wisdom that I've gained from all my mistakes to do the right thing.
Michael Eisenberg:
Life's a journey.
David Magerman:
Life is a journey.
Michael Eisenberg (28:04)
Speaking of the journey, I actually want to contrast two things. Somebody, one of our crack team mentioned that you sold candy.
David Magerman (28:13)
Wow, when did they find that? At some point. That's great, yeah.
Michael Eisenberg (28:16)
Was that like in high school?
David Magerman (28:18)
Yeah, so, I mean, you remember in high school, the clubs would sell candy bars to raise money. That was one of the things.
Michael Eisenberg (28:25)
In elementary school we had that.
David Magerman (28:46)
It was what we had in high school. The clubs would raise money for prom, raise money for sporting events. The sports teams would do it. They would sell, you know, lemonade. But they would sell candy bars. And I figured out that if you went to the local pharmacy, they sold candy bars for like 17 cents a bar, and you got 50 cents a bar at school.
Michael Eisenberg (28:52)
You bought retail and sold super retail.
David Magerman (29:18)
Right. Well, what happened was, the clubs would buy Snickers bars. So if you were, you know, doing it for the track team, everyone was selling Snickers bars. So I had a menu. I had 15 different kinds of candy. I still to this day have the spreadsheets–I had daily graphs of how many of each bar I sold, and I knew that, like, the Twix bars were not selling well, but the Milky Way was a hot item.
So I'd stock up on the Milky Way, and I'd say, fewer Twixes. And certain teachers knew I was coming around, so they would come next door to my classroom when I was there and buy some candy from me, and I made sure I had what they wanted, gave them their fix. And it was all going really well until this Russian kid, who was much more sophisticated than I was–he had people working for him. He would sit in the parking lot in the morning–
Michael Eisenberg (29:50)
He was the Tupperware guy.
David Magerman (30:15)
Yeah, he had, you know, guys in the parking lot. He would open the back of his trunk and pull out bags of candy to give out to his sellers. And they would go to different parts–it was a big school. There were like 3,000 kids in the school. And they would cover the market. And when he found out that I was doing it, he came to me with one of his goons and said, “You should work for me. You should work for me.” And I said, “No, I'm happy just doing what I'm doing.” And so, you know, he was not very polite about it, and I felt a little threatened.
Michael Eisenberg (30:23)
Is that when you decided to stop being an entrepreneur, start being an investor?
David Magerman (30:26)
No, but before he had a chance to really do damage to me, he got caught, and he said he worked for me. And because of the guy I was in high school the rest of the time, they believed him, and they were happy to punish me. So they suspended me. And I learned a lot about recidivism, because in indoor suspension, I was in suspension for–
Michael Eisenberg (30:48)
For those who don’t know what recidivism is, recidivism is the propensity of somebody who, for example, you put in jail to go back to being a criminal. Right.
David Magerman (30:53)
So I was in indoor suspension for three days. I got off one day for good behavior, so I did two days. There was this really cute girl with dyed hair in front of me who effectively invited me to be part of a drug ring. She wanted me to ride my bicycle into Miami and sell cocaine. I didn't consider it. She was cute, but not that cute. But it just goes to show–because they knew I was caught for selling something illegally, they didn't know it was candy, they thought I was a prime candidate. I said, “I'm sorry, I don't drive.”
She said, “That's great, you ride a bicycle, that'll be perfect cover, no one will ever suspect.” I said no.
Michael Eisenberg (31:27)
That kind of candy. Because the other thing we turned up is that after some trip to Israel when you were much younger, 10 weeks I think you spent in Israel, you, at the time, wanted to become religious, but you didn't because your mom deprogrammed you or something like that. Tell us about that. Because if she didn’t save you from going to sell candy or getting beat–
David Magerman (31:52)
No, my father would drive me to the drugstore for that. You know, immigrants' lives are hard. My grandparents came over and were at different levels of religiosity. My grandfather was Orthodox, Torah observant. And my mother grew up feeling like his orthodoxy destroyed their family. And there's a lot of water under the bridge there. But basically, she had this image that being an Orthodox Jew was stifling, that it was a cult, that it made no sense, and that if I got too deeply immersed in it, I would get swallowed up by it and never come out, and it would ruin my life.
So she made a very big point of telling me that she had nightmares I was frozen in a freezer, that I was in a block of ice, that it was going to ruin me. And I love my mother, and I respected my mother. And it was meaningful to me that she was objecting to it. And it eventually succeeded in pulling me away from it. I mean, I wasn't wearing a black hat at that point. I was still in the earlier stages, but she succeeded in pulling me away from it at that time.
Michael Eisenberg:
How old were you then?
David Magerman:
I was a senior in high school.
Michael Eisenberg (33:02)
You were in high school. And where'd you go in Israel for those 10 weeks?
David Magerman (33:05)
Alexander Muss High School in Israel, in its early days. I was in a public school. I was in the first group from that public school to get permission to do the program.
Michael Eisenberg (33:14)
What's one thing you took away from those 10 weeks in Israel?
David Magerman (33:17)
I mean, God spoke to me when I was there. The first time I stood on Mount Scopus and I saw Jerusalem, God spoke to me and told me that I should keep the mitzvot, that I should learn and keep Torah, that I was a good person, but that I needed to be a good Jew. And that meant keeping the mitzvot. And what it really taught me–the program itself, and the rabbayim that I met there, the teachers, some of them were rabbis, some of them weren't–was just the holiness of the land. The time I was in Israel, I was a different person. It activated a part of me that got watered down, and eaten away, and eventually covered over by the time I got back to America. When I got back to America, what was exposed in Israel–the spirituality, the connection to God–was eventually covered over and stifled.
Michael Eisenberg (34:06)
When you finished at Renaissance in Long Island, why'd you move to Philadelphia and not to Israel? I get the Liberty Bell and you know, it's got a verse from the Bible on it, but...
David Magerman (34:17)
No, no, it had nothing to do with that. I had a standing offer from the Dean of the engineering school who I'd been friendly with–
Michael Eisenberg:
At Penn.
David Magerman:
At Penn, on the board of overseers of the school–that I could come and teach there if I wanted to. But I really wanted to be close to New York. My wife's a New Yorker. I like New York. But I felt trapped on Long Island. So I needed to find a place that was within reasonable commuting distance of New York, but was inland enough that I could escape if there were ever, like, a hurricane, a disaster, a bit of terrorism–I could just go west and not have to get on a boat. Because if you get on the highway in Long Island, if you've ever been on the LIE, when everyone's going to something in New York, you're not going anywhere. And the public transportation, it's not like what it is in Israel.
Michael Eisenberg (35:09)
Don't take the jitney. What's it called, the jitney?
David Magerman (35:14)
Yeah.
Michael Eisenberg:
Don't take the jitney. You're not gonna get there very quickly. And so later in your life, you kind of come back to religion, which I guess has not ruined your life.
David Magerman:
No, it's–thank God I found a connection to Judaism, to Torah, to God. And really, it was actually a cousin. I have a cousin who lives in Har Nof. I just visited him.
Michael Eisenberg (35:33)
That's an area in Jerusalem.
David Magerman (35:48)
Yeah, I just visited him before this meeting. We were very close at the time. I actually wasn't speaking to him, but he didn't know that, because he was a Haredi guy. So he didn't know that I wasn't talking to him. It didn't really matter, but he invited his whole extended family to come to Israel for his oldest son's bar mitzvah. And it was like a 12-day trip. And I said to my wife–we had two young boys and she was pregnant with our third–I said, “Trip to Israel. Let's go. What the heck.” And I saw, you know, he had this large family and an even larger community. He was very much like the mayor of the town, very, very central.
And I was there for meals, for lunches and dinners. And every day there was this group of loving people who loved each other, who loved Torah–10-year-olds who were answering Torah questions. And I thought it was just such a beautiful lifestyle that I was really missing. And I really felt like the dumbest guy in the room, because these 10-year-olds knew Torah that I didn't know. And not that I was expecting to know Torah, but I didn't expect to have these kids knowing so much more than me about my tradition, my culture, my history. And a friend of my cousin saw me and asked me what I was upset about. I told him, and he said, “When you go back to America, there's a program called Partners in Torah. You call them up. They match you up with a partner, and you can learn for an hour a week on the phone, anything you want about Torah.”
And so I did that. I started it, and it was a gradual path over the course of seven years.
I couldn't tell you when I became Torah observant. It was sometime during that seven-year period: from learning about things that I wasn't doing, to sort of trying a little on for size, to then pretending I was Orthodox and seeing what would happen if I tried doing some of these things, to eventually just realizing I believed it–that the Torah was true, and that my life was meant to be lived within the bounds of Torah.
Michael Eisenberg (37:38)
For a guy who's an achiever and wants the pat on the back that you achieved something–this feels like a pretty frustrating pathway, right? It's starting at the bottom again, on some level, in your own personal life. You know, to kind of start from a place where, as you say, a 10-year-old knows more than you do. How did that feel?
David Magerman (37:57)
Well, you've never met a Kiruv Rabbi then. Kiruv Rabbis–you pronounce a Hebrew word almost correctly and it's like you've just, you know, gotten a Nobel Prize. The Kiruv Rabbis are really good at making you feel really good for doing simple things. So no, I definitely got a lot of emotional support from the rabbayim for doing simple things. And then eventually, you know, I convinced them to stop pandering to me–like, challenge me, make me do something really good.
Michael Eisenberg (38:28)
Was there any dissonance? You had this super successful business career, and engineering career, and software career, and you felt immensely good, you know, solving hard problems. And at the same time, you're kind of finding a new world here where, as you said, they make you feel really good for doing simple things?
David Magerman (38:43)
You know, it's just two parts of your brain. It was like two parts of my life. I really compartmentalized the time I was sitting down writing code, and then the times that I was learning Torah, studying with a chavrusa, and then eventually, when I was actually practicing and davening and studying on my own. It was just two parts of my life, and they really were independent.
Michael Eisenberg (39:04)
I want to go back to language for a second. You did NLP before NLP was a thing. You said you built language models before we knew what an LLM, a large language model, was. How do you think about the journey from the 80s, when you started in this area, to where we are today? And where do you actually think we are today?
David Magerman (39:08)
It's a great question. And I think that–I look at all the different aspects of my career, starting with even my candy business, and all the things that I've done really are meeting together at an intersection of what's going on in the world today. And all of my different learnings are really relevant. And one thing: when I was studying for my PhD–you have to study the history of the field–I learned about the history of AI and of computer science, and it is littered with over-promising and under-delivering.
Going back to, like, ELIZA the chatbot in the 60s, where people thought, ‘Wow, this is an intelligent agent, this is artificial intelligence, this is real AGI! You can talk to it and it'll give you counseling advice!’ And it was not that. And people eventually realized the demo was just canned, and it was just a toy. And all through the years, even the research I did, when I had these great results–when I look back on them, I was a horrible scientist.
My testing methodology was horrible. I probably wrote papers that had inaccurate results in them. You know, when there's no money on the line, you're just trying to get a paper published. I was never intending to do anything dishonest, but you just didn't have the same rigor when you were doing research. And I think with a lot of the results that you show when you're not really solving a real-world problem–when you're just solving a toy problem, or doing a demo–you can get away with a lot. And I think that back then there was a movement away from rules and toward data-driven analysis of things. But that was before the internet; we had a million words of data. And think about that, against the amount of data these large language models are trained on. We had a corpus of a million words, and that was like, wow, they're using a million words. How are they managing that? That's a lot of data.
So, you know, there were a lot of things that didn't work because we didn't have enough data. We didn't have enough computing power. But the basic ideas of what we were trying to do are very similar to what we're doing today. The biggest similarity is the fraudulence of the demos.
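To make "a million words was a lot of data" concrete: a language model of that era largely boiled down to counting word co-occurrences and estimating probabilities. A minimal bigram sketch, with a stand-in corpus and no smoothing (real systems of the time added smoothing for unseen word pairs):

```python
from collections import Counter, defaultdict

# Stand-in corpus; early-90s models trained on about a million words.
corpus = "the markets moved and the models moved with the markets".split()

# Count bigrams: how often each word follows each preceding word.
bigrams = defaultdict(Counter)
for prev, word in zip(corpus, corpus[1:]):
    bigrams[prev][word] += 1

def p_next(prev, word):
    # P(word | prev) by maximum likelihood; real systems smooth this
    # so unseen pairs don't get probability zero.
    total = sum(bigrams[prev].values())
    return bigrams[prev][word] / total if total else 0.0

print(p_next("the", "markets"))  # 2/3: "the" precedes "markets" twice, "models" once
```

Scaling this idea from a million words to trillions of tokens, with transformers in place of count tables, is a fair caricature of the intervening thirty years.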
Michael Eisenberg (41:41)
Interesting point.
David Magerman (41:53)
My understanding was that OpenAI spent a year canvassing the hundred most common things people were going to try to do with large language models once they were out. And they made sure those hundred things worked well in a demo. But those hundred things–writing Shakespearean sonnets? Not really very practical. Not a lot of business use cases for that. And so while it was amazing to see it write a financial essay in iambic pentameter–like, that's lovely, but that's never going to be something you're going to want to do. But people assume, wow, if it can do that, imagine all the other amazing things it can do.
Well, it can't. It was engineered to be able to do those hundred things really well. And now, as it's being deployed, we're seeing it was over-promised and is under-delivering. If you think AI started in 2021, you're going to have one perspective. If you think AI started in 2013, when deep learning really hit its stride, you're going to think another thing. But if you realize that AI started in the 40s, and was a subject of science fiction before it was any kind of reality, and that there's been close to a hundred years of thought about AI–if you look at it from that historical perspective, you realize we're nowhere near where people say we are. We are in a bubble. The bubble's going to burst. And maybe 5% of the use cases that are being promoted as solved problems for generative AI will turn out to be really valuable. This is a generational accomplishment–the transformer models are a once-in-a-generation advancement. That is amazing.
But when you take that and promise AGI, and don't deliver it, there'll be a lot of dissatisfied customers. And I think that's the big mistake we're making now: we're wildly over-promising and we're going to disappoint. The other mistake we're making is that we've taken science out of the researchers' hands and put it in the drugstores a decade before it's ready.
Michael Eisenberg (43:57)
What does that mean?
David Magerman (44:20)
Imagine if you took biological weapons, but instead of using them in level five containment environments, you just put them in CVS, and let people buy them and play with them. That's what we're doing with AI.
Michael Eisenberg:
Explain better.
David Magerman:
AI has the power to distort people's behavior, to give people deeply wrong answers to questions that they're gonna depend on, and then they make massive mistakes. If you give AI agents the ability to log into things, access your bank account, and do things in the world, you could make mistakes at computer speed, which could do massive damage. And we're giving it away to children.
Michael Eisenberg (44:41)
You think currently AI is an adolescent, and yet we're putting it in the hands of everybody. And so it's going to create much more pain and destruction than productivity? Because–you think the productivity is way out?
David Magerman (44:55)
No, AI is an automatic machine gun. And we're giving it to children.
Michael Eisenberg (45:00)
That's a pretty negative view of where we're at right now.
David Magerman (45:03)
What we're doing is so irresponsible. And look, we do this all the time. Throughout history, we've done things like this. We've killed a lot of people. And eventually, after a huge percentage of the population is dead, we learn our lesson. We blew up, you know, whole cities in World War Two. And then we realized maybe we shouldn't nuke people. So, you know, we make mistakes and we learn from them. We're going to suffer, I think, enormous consequences. I don't know the scale of it. And then we'll–please God, if we're still around–we'll learn our lessons.
Michael Eisenberg (45:32)
If we're still around, wow, that's pretty ominous.
David Magerman (45:33)
I mean, I'm being facetious. We'll be around. But I think the scale of the damage could be pretty extreme. Just look at the scale of the bubble, the financial bubble. I would say that the bubble that exists today is probably a factor of 10 to 50 times what the internet bubble was.
Michael Eisenberg (45:53)
Why do you say that? That's interesting.
David Magerman (46:01)
Because the internet bubble was startups. Startups that inflated like balloons. We have trillion-dollar companies that are becoming four-trillion-dollar companies over AI. That level of inflation–
Michael Eisenberg (46:08)
Right. But also, they're driving the capex spend from cashflow rather than from investment dollars. Like, I was here for the dot-com bubble. It was VC dollars buying ads and then being recirculated into equity in these companies. The difference here is that this is free cashflow of mature companies, for the most part, funding these things, although now we're moving into the debt markets to start to fund a lot of this capex buildout. And by the way, I'm in the camp also that we're in a bubble, for what it's worth. Not that I think it matters, but I think we're in a bubble.
And we're about to get to the next leg of the bubble, which is the debt markets funding all this capex buildout.
David Magerman (46:43)
But the problem is, it's Monopoly money. It's these 10 companies giving each other billions of dollars. They're not really–the money you're spending on employees, that's money. But when you buy ten billion dollars' worth of computers from Nvidia, and you invest in Nvidia, you're basically just paying yourself for equipment, and so you can overpay yourself, and it all kind of cancels out.
Michael Eisenberg (47:10)
I know, but that was one specific deal. When Meta is buying Nvidia chips, or Google's building out their own chips and installing them over buying some Nvidia chips, et cetera–not all the deals look like this. Whereas in the internet time, I think there were many more of these kinds of funky, wonky deals, where it was circular dollars, or VC dollars buying ads that bought equity in companies. There's been a couple of these deals–
David Magerman (47:36)
But right now, inferencing is a huge money loser. And people at startups, and mature businesses, are building applications that are basically like using a sledgehammer to tie your shoelaces. I mean, they're using an incredibly complicated machine to do a very simple thing that you could do with a linear regression model, that you could do with a deterministic algorithm. And they're only able to do that because it's being subsidized. If something costs a dollar and you can charge $3 for it, you can make a profit. If it's costing the company that's doing the inferencing $10, eventually they have to charge $15.
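To put a number on the "sledgehammer" point: many of the subsidized LLM calls being deployed replace something as small as this. A sketch with invented toy data, using plain least squares:

```python
import numpy as np

# Toy data: predict a value from two features with plain least squares,
# the kind of simple, deterministic job that doesn't need an LLM call.
X = np.array([[1.0, 2.0], [2.0, 1.0], [3.0, 4.0], [4.0, 3.0]])
y = np.array([5.0, 4.0, 11.0, 10.0])

# Add an intercept column and solve min ||Ab - y||^2 for b.
A = np.hstack([X, np.ones((len(X), 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict(features):
    # Deterministic, auditable, and costs microseconds per call.
    return np.append(features, 1.0) @ coef

print(predict(np.array([2.0, 2.0])))  # about 6.0
```

The point isn't that regression solves everything; it's that a deterministic model with near-zero marginal cost often covers the "very simple thing" the LLM is being rented to do.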
Michael Eisenberg (48:18)
Is OpenAI gonna need a bailout, you think?
David Magerman (48:21)
OpenAI isn't a functioning business. I mean, the business model is so demonstrably a Ponzi scheme that I don't know when it ends. They're now petitioning the government to–
Michael Eisenberg:
Bail them out.
David Magerman:
Give them a trillion dollars–no, no, to invest in them. No, they're a national treasure, a resource that needs to be invested in by all of–all of the American people need to be taking their tax dollars and putting them in to save America.
Michael Eisenberg (48:49)
Do you want your tax dollars to go there?
David Magerman (48:50)
Oh God, no.
Michael Eisenberg:
I want to go back to your point. So on the one hand, this is a generational technology. At least I think it's a generational technology. It's a significant multi-order of magnitude improvement over what we had previously.
David Magerman:
For some aspects of the problem.
Michael Eisenberg:
Exactly, for some aspects of the problem. And I'd also argue that the internet actually didn't start working for people, and create whatever it created over the last 20 years, until we had the bubble. Meaning the bubble was actually critical, in '98, '99, 2000. I lived through it and the crash. In order to lay the fiber, enable people to build the businesses, et cetera. Then we had a 20-year technology explosion. I think it's fair to use Marc Andreessen's line: software ate a large part of the world and the economy, and that was valuable.
So my own view of the bubble today is, it's a necessary step. It's a capital markets step to lay the infrastructure that creates the AI productivity boom over the next 20 years. Whenever the bubble bursts, that's when people actually start to reap the benefits of the lower cost, because it's all been subsidized, of compute, of bandwidth, of intelligence, or whatever it is. I don't think we're really gonna get to AGI. I've been arguing with my partner here all the time about this. I think that's way, way, way out there, if ever. I'm actually a giant believer in human beings.
David Magerman (50:14)
I think Hasidus gives us an insight into why we're never going to have AGI.
Michael Eisenberg (50:18)
I'm in the same camp, by the way. I'm not sure about it because of Hasidic thought, but I'm in the same camp. More because of Maimonidean thought, but we'll get there another time, when we won't bore a full audience of people with the difference between Hasidic and Maimonidean philosophy. So this is like a necessary step for us to actually get the giant productivity gains, and people lose a lot of money in these times. I mean, that's what happened in 2000 and at other times.
David Magerman (50:44)
If it were being deployed the way the internet was deployed, I would say you're right–and if the starting point pre-internet were comparable to the starting point pre-GenAI. But I've been an investor in AI companies since 2017, and really I invested as an angel investor before that. And AI before transformer models was a pretty accomplished industry. We were solving problems in production, in enterprise environments, in small businesses, and in the world, with pre-GenAI technology.
There were aspects of problems we couldn't solve well. The pattern recognition was not at the level that you can achieve on some problems with transformer models. People were overusing deep learning. There were a lot of things they were doing–but the state of the AI world before GenAI, before transformer models and before ChatGPT, was close to being really useful. And if we had just taken GenAI and transformer models and looked at what doesn't work well yet–let's fix those problems–I think we'd be in a very different place. But instead what we said was, wow, this thing can do Shakespearean sonnets. I'm gonna throw away the deep learning models, throw away linear regressions, throw away the dynamic programming solutions to hard problems, and just throw it into a generative AI model and get an answer, just become a prompt engineer. And I think throwing away the powerful solutions that were almost successful has set us back a decade.
Michael Eisenberg (52:31)
You're a computer programmer, and certainly the one area where this is actually making a lot of progress quickly, and we’ve seen this in our portfolio companies, is computer programming.
David Magerman (52:38)
I would push back on that.
Michael Eisenberg (52:40)
Go ahead.
David Magerman (53:08)
Okay, go ahead. So we had a number of companies in our portfolio that said they were starting to use these copilot tools to accelerate their software development, and that they were seeing productivity gains and it was really helping. So there are two main problems with this. Actually, three problems. The first one they discovered is that it could write junior-level code pretty reliably, but it was really bad at architecture.
Michael Eisenberg (53:12)
Yes.
David Magerman:
So it couldn't design things. So you could use it as a tool–
Michael Eisenberg:
To generate code.
David Magerman:
To generate code. But you couldn't design software systems with it, and people were using it to design software systems. They were ending up with unmanageable architectures that weren't scalable. The second problem is code reuse and code maintenance. If everyone in your company is using AI to generate code, you're not going to have the responsible coding practices that you typically have for human coders, where code can be read and understood. So if you have an algorithm that's used 14 times in your company, GenAI is not going to find the library that you've written once. It's going to write 14 versions of it, and they're probably not going to be the same. Depending on whatever mood the LLM is in at the time, or how the prompt is given to it, it's going to come up with a different algorithm, and then you're going to be maintaining 14 pieces of code that do the same thing.
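[Editor's note: a minimal, hypothetical sketch of the code-reuse failure David describes. All function names and data are invented for illustration: one shared, agreed-upon utility versus two near-duplicates of the kind an LLM might regenerate from separate prompts, which quietly disagree on edge cases.]

```python
# Hypothetical illustration of the code-reuse problem; not from any real codebase.

def normalize_scores(scores):
    """The team's single, shared utility: min-max normalize to [0, 1]."""
    lo, hi = min(scores), max(scores)
    if hi == lo:
        return [0.0 for _ in scores]  # agreed-upon convention for constant input
    return [(s - lo) / (hi - lo) for s in scores]

# Two near-duplicates of the kind an LLM might regenerate on separate prompts.
# Each "works," but they disagree on edge cases, so bugs surface inconsistently.

def scale_values(values):  # duplicate #1: divides by max, not min-max
    m = max(values)
    return [v / m for v in values]  # fails differently when m == 0

def rescale(xs):  # duplicate #2: min-max, but silently returns input when constant
    lo, hi = min(xs), max(xs)
    return xs if hi == lo else [(x - lo) / (hi - lo) for x in xs]

if __name__ == "__main__":
    data = [3.0, 3.0, 3.0]
    # Three "equivalent" helpers, three different answers on the same edge case:
    print(normalize_scores(data))  # [0.0, 0.0, 0.0]
    print(scale_values(data))      # [1.0, 1.0, 1.0]
    print(rescale(data))           # [3.0, 3.0, 3.0]
```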
Michael Eisenberg (54:07)
I assume there's some architect that sits on top of this, a human-being architect who sits on top of it and tries to manage it.
David Magerman (54:12)
You're giving the teams deploying this a lot of credit. There are a lot of people who are just throwing this at teams of programmers and saying, “Use this,” and, “We're firing half of you.”
Michael Eisenberg (54:23)
So you don't like your companies using co-pilots, or…?
David Magerman (54:26)
So they found out that they thought they were saving time, but after a few months, they realized that what they were getting was unmanageable enough that they had to go back and redo it. They ended up losing time. Because they thought they were somewhere they weren't, they actually couldn't satisfy some of their customers' demands, because they thought they had a problem solved. Then they had to take time out of their work schedule to redo what they thought they'd done, which meant they missed deadlines.
But there's a third problem, which I think is really important, and which applies across the board to all AI use, whether it's in financial environments, in consulting, in real estate. The best we can hope for, I think, from these tools is replacing junior-level work. But the way that you get senior people, the way that you get your senior management team and your mid-level management and the people you promote–
Michael Eisenberg (55:23)
Is by being a junior.
David Magerman (55:47)
Is by having a bunch of people be junior, and then figuring out which ones are learning the field well enough to be qualified for the next level. What we're gonna find in the next two years is the best prompt engineers. I'm pretty sure the best prompt engineers are not the best managers of whatever area you're in, and I think people are gonna find out that promotion, mentorship, the ability to create a mature workforce is going to be severely stunted by the use of AI.
Michael Eisenberg (55:57)
I agree with that. My friend Josh Wolfe at Lux has a similar view to yours, I think. My own view is a little more bullish than that, but I basically agree with your thesis: we're gonna have a hollowed-out generation that doesn't get to be the next level.
David Magerman (56:12)
I will admit that I am a hardcore programmer, and I'm deeply offended by the idea, so I'm biased, I understand. My partner has to take what I say sometimes with a grain of salt, because I am very much a snob when it comes to coding. I still think I'm right, but even if I were wrong, I might still say the same thing.
Michael Eisenberg (56:32)
You have high conviction; like you said about Jim Simons, it may be a problem in the math. So if this was gonna go right, tell me what going right means in the world of AI. You have a negative-ish view. Okay, give me the opposite view. Steelman the opposite view for me.
David Magerman (56:42)
You know, one thing I learned at Renaissance, and even going back to my natural language days, is that the driving force here is data. And it's not just data, it's information. If you have a trillion ones, you have one bit of information, maybe two if you know that it's a trillion. But if you have a million characters that inside of them hold the secrets to the meaning of life, you've got information. So just because you have data doesn't mean you have a solution.
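[Editor's note: the formal version of this point is Shannon entropy, which David is alluding to. Below is a minimal sketch, ours rather than his, showing that a long constant string carries essentially zero information per symbol, while a short varied message carries much more.]

```python
import math
from collections import Counter

def shannon_entropy_bits(s: str) -> float:
    """Empirical Shannon entropy per symbol, in bits: H = -sum(p_i * log2(p_i))."""
    n = len(s)
    counts = Counter(s)
    h = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return max(0.0, h)  # clamp the -0.0 case for constant strings

# "A trillion ones": a huge amount of data, almost no information.
ones = "1" * 1_000_000  # a million stands in for a trillion here
print(shannon_entropy_bits(ones))     # 0.0 bits per symbol

# A varied message: far fewer characters, far more information per symbol.
message = "in the beginning was the word, and the word held the secret"
print(shannon_entropy_bits(message))  # roughly 3-4 bits per symbol
```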
And if you have data that has information in it, from an information-theory perspective, there are dozens of mathematical algorithms and pattern recognition algorithms that can pull that information out and help you reduce uncertainty about the future. But we've become a one-trick pony, where everything is just thrown into an LLM. I think success is looking at the data sets that you have, figuring out how much information you have and which of that information is relevant to solving a real-world problem, and then finding the right tool at the right level of complexity and cost to extract that information. Looking at the stars, you really need advanced math and a lot of computing power to track anomalies in space. When you have McDonald's, and data from McDonald's, and you want to figure out how to price your fries versus your sodas, you probably don't need the power of a trillion-parameter large language model to come up with optimal pricing. So I think success in AI would be getting to a point where you have a methodology: you look at the problem you're solving, you look at the information and the data you have, and you find the right level of complexity of analysis to solve that problem well.
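[Editor's note: a hypothetical sketch of "the right level of complexity" for the fries-versus-sodas kind of question. The numbers are invented and the choice of ordinary least squares is ours, not David's; the point is that a two-parameter demand model, not a trillion-parameter one, answers this class of problem.]

```python
# Hypothetical sketch: pricing with a tiny linear demand model instead of an LLM.
# The observations are invented; the point is matching model complexity to the problem.
import numpy as np

# (price, units sold) observations for fries, made up for illustration
prices = np.array([1.50, 1.75, 2.00, 2.25, 2.50])
units = np.array([980, 900, 815, 700, 610])

# Fit a linear demand curve: units = a + b * price (least squares)
b, a = np.polyfit(prices, units, 1)  # polyfit returns [slope, intercept]

# Revenue R(p) = p * (a + b * p) is maximized at p* = -a / (2b) when b < 0
p_star = -a / (2 * b)
print(f"estimated demand: units = {a:.0f} {b:+.0f} * price")
print(f"revenue-maximizing price: ~${p_star:.2f}")
```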
Michael Eisenberg (58:33)
What do you think of Meta and Google throwing tens of millions of dollars and more at some of these super high-end AI researchers?
David Magerman (58:40)
I think that it's necessary. It's not about the person, it's about the stock price. If Meta doesn't steal someone from OpenAI, then people are going to question whether they're competitive in the AI space. But if they find someone that they tell the world is worth $125 million, they must be building a world-class AI team that's going to, like, generate AGI.
Michael Eisenberg (59:04)
Smoke and mirrors at $125 million a mirror.
David Magerman (59:06)
But the question I ask is: say you've got a guy who's a really brilliant, accomplished 40-year-old, someone who's done an incredible amount of work for the last 15 years at Meta, and he's maybe topping out. Let's pretend he's making a million and a half dollars, which he's not, but let's say he is, maybe with stock options making that kind of low seven figures. How does he feel when this 24-year-old kid, who hasn't done a thing in his life except, you know, get good scores in college and solve one problem at one company, is making 10 to 100 times what he's making?
Michael Eisenberg (59:42)
What does a guy who comes in with a great degree and five years of experience make at Renaissance? Or what would he have made?
David Magerman (59:49)
I don't know, because I've been out of there for a while now.
Michael Eisenberg (59:52)
I'm sure it’s probably pretty high, right? Yeah, it's a lot of money. Doing something you thought was worthless, by the way.
David Magerman (59:55)
But you only make that money if you actually solve real problems.
Michael Eisenberg (1:00:02)
So here too, you make money in the stock price, so why don’t you pay him the money?
David Magerman (1:00:05)
Right, it could be anybody. It's not him, it's the fact that they're doing it.
Michael Eisenberg:
Got it.
David Magerman:
There are 30 people that they could steal from five different companies, and it doesn't matter which one you do, and it doesn't matter if he does anything ever.
Michael Eisenberg (1:00:16)
Right. It's not actually the performance of the portfolio, it's more the perception.
David Magerman (1:00:19)
I think it's a smart move. I think the hard part is managing the social dynamics: figuring out how to keep that person productive, and part of a team, when he's suddenly been handed a Lamborghini and everyone else is driving Corollas.
Michael Eisenberg (1:00:33)
Why did you become an angel investor after leaving the hedge fund world?
David Magerman (1:00:36)
I started being an angel investor when I was in the hedge fund world, and I have the proud accomplishment that every single company I invested in went to zero.
Michael Eisenberg (1:00:45)
You have to lose money in this business to learn.
David Magerman (1:00:47)
Well, I learned a lot of lessons. With each company, I lost money, and I learned a lot of lessons. The money I was investing was, I mean, I hate to say it this way, but it was de minimis. I was just making so much money doing my job that, you know, before I was giving tzedakah, before I was giving charity, I was just having fun, burning money in companies.
But actually, I'm the reason they failed. Because by giving them money irresponsibly and not overseeing it, I made them fat and lazy. And when they came to me, pretending that but for this one mistake they would have succeeded, and that they just needed another $100,000, I would keep giving it to them, giving them the drug and not really solving the problem. And I eventually realized that I was the single point of failure. I was the reason these companies were failing.
Because if they had taken money from you, they would have had oversight, a taskmaster, quarterly check-ins, budget meetings. You would have found the inefficiencies. You wouldn't have given them money unless they reached the metrics. I was an irresponsible investor, and I also invested in science experiments, because I loved the science. And this is why, when I left Renaissance, or when I was kicked out, I realized that I had a lot to offer the venture community doing work in AI and data science, because I knew a lot that they didn't know. But I looked around and realized I needed a partner, because I understood that my experience as an investor was completely irrelevant.
Writing code for a quantitative hedge fund has absolutely nothing to do with investing in startups. I had domain knowledge that very few people in the world had at the time, and I probably still have some edge in that area. But I needed to find someone who had as much of an edge in business modeling, in identifying good founders, in the human side of investing in startups. And so I happened to be in Israel checking out Startup Nation, and I was meeting with different VCs and lawyers, and one of the lawyers I was meeting with said, “By the way, you really should join or start a firm. I'm doing the first deal for this new fund that just started up. The partner is looking for a domain expert, a scientist, to join his team. He's a really experienced venture investor. But, by the way, he's not in Israel, he's in New York.”
So I went halfway around the world to Israel to meet my partner, who, to this day, is still my partner, and who works in New York.
Michael Eisenberg (1:03:15)
Good things happen in Israel, though.
David Magerman:
It was amazing, it was meant to be.
Michael Eisenberg:
So, it says that Differential Ventures, which I think is the name of your firm, focuses on ethical, sustainable uses of data. I'm curious about two things. One is, what does that mean? And the second is, do you think that has anything to do with either your experience at Renaissance or your religious journey?
David Magerman (1:03:39)
So what it means is that there are a lot of really profitable ways of using data in industry and enterprise environments that use data that has no relationship to human beings, or that do no harm to human beings. You can help companies make better decisions in their workflows by using data from the world, and from their companies, to solve problems. You could also take human behavioral data, their location, aligned with their ethnicity and their financial situation and their family structure, and make even more money convincing people to do things that are against their interests, to profit the mothership. And I think that's what social media has done. That's what corporate America has done: they are modeling people in ways that work against those people's interests to–look, Renaissance makes money off of human psychology.
We know that people are going to overbuy something based on some phenomenon, so we buy it ahead of them, and then we sell it when it hits where we think it will. Now, we're not causing them to do it; we're taking advantage of their mistakes. Social media and current technology companies are causing the mistakes.
Michael Eisenberg (1:04:50)
Their proclivities.
David Magerman (1:05:06)
They're goading people into doing things that are against their interests, knowing they'll profit from them. And I think that is an unethical use of data. So we see those companies, and we could probably make our investors more money if we invested in them. We choose not to because, you know, from a religious perspective, if you're doing something that harms the world, I don't think God will let you succeed.
Michael Eisenberg (1:05:16)
There’s plenty of people who made a lot of money doing it, so they were successful.
David Magerman (1:05:19)
They were successful.
Michael Eisenberg (1:05:20)
You just said you think social media companies are doing bad things. Mark Zuckerberg's been incredibly successful.
David Magerman (1:05:25)
If you look at the state of democracy in the world today, you can't possibly say he's been successful.
Michael Eisenberg (1:05:30)
That's a macro question, right? I think this comes to the question of success, right? On an individual level, the guy who invented, well, he didn't invent it, actually, he was a latecomer, he followed MySpace and Friendster and a bunch of others, but he perfected social media, which you think has caused a lot of damage. He's made a lot of money and had a huge amount of impact on the world, positive or negative. On a macro level, democracy, to your point, and I wrote this in my most recent book, is super challenged by this, by social media and, you know, the internet. And AI has also been used by autocracies to kind of put their thumb on humans. But the question is how you define success. Is it Zuck's success, as an example, or is it society's success, which is where I think you're headed?
David Magerman (1:06:17)
Well, but I think it's also about individual success. If you had five times as much money, would you be better off, happier? Would your kids be better off? Would you be more of a success as a human being if you had five times as much money? Do you think?
Michael Eisenberg (1:06:38)
Do you think Mark Zuckerberg thinks he's a big success?
David Magerman (1:06:43)
I think he has convinced himself he's a big success. I don't know how he's raising his children. I don't know what morals he's passing on to his family. I don't know how he views his purpose in life, how he's serving God, how he's serving humanity, and what he's passing on to his kids. When I reached what I thought was the pinnacle of success, at Renaissance in 2004, I had two young kids, and I realized that at some point they were going to ask me some form of the question: what's the meaning of life?
And I realized that, based on the way I was living, the only answer I could give them with a straight face was, ‘whoever dies with the most toys wins.’ Because that's the life I was leading. Everything was about succeeding, and making money, and making a higher salary, a bigger bonus. And when I realized that, I realized that wasn't who I was. But then I didn't know who I was. And that's when I started on my Torah journey, to learn that that's not the meaning of life. That's not why we're here. And I had to figure out what is. In the end, I realized I needed to figure out what God wants from me. But I wanted to be able to pass on something to my children that would give them an accurate meaning in life. You know, I could have given them that meaning of life, of whoever dies with the most toys wins, and maybe they could have been really good at that. But that's not the right answer. And so, is Mark Zuckerberg giving his children the right upbringing, the right message, to make them successful human beings?
Michael Eisenberg (1:08:11)
I haven't seen Mark in about 15 years.
David Magerman:
I don't know. But you know, 50 billion or 20 billion or 500 million, I don't think that has an iota to do with being a successful human. In many ways, I think it gets in the way of it.
Michael Eisenberg:
Has your study of Torah actually impacted your investment decisions and which companies you invest in?
David Magerman (1:08:34)
It's put my investing into perspective. I've been a little bit upset, especially over the last two years, about calling Israel the Startup Nation.
Michael Eisenberg (1:08:45)
I'm with you.
David Magerman (1:09:02)
Israel, the State of Israel, is the steward. If you ever read Tolkien's Lord of the Rings, the steward of Gondor was not the king. His was the family put in place to be the placeholder until the king came back. And then Gandalf came and a lot of bad things happened. Medinat Yisrael is the steward of Eretz Yisrael.
Michael Eisenberg (1:09:14)
The land of Israel, yeah.
David Magerman (1:09:31)
But this is the holy land that God gave us, promised to Abraham, Isaac, and Jacob. Joshua conquered it. The Jewish people lived here. We built the Temple, where the Shechinah, God's presence, was here. And our purpose in being here, as opposed to being in Uganda or Canada, is that we are here to preserve this land. In the time we live in now, money is a big source of power. So I think our success as Startup Nation, how we keep creating these billion- and multi-billion-dollar companies, is a mechanism God is using to make us relevant and powerful in the world. It's not why we're here.
I was here in Jerusalem on October 7th, and I flew back on the Sunday after the attacks. And I immediately booked a trip back here six weeks later, because I said, “I can't pick up a gun and shoot terrorists, but I have to do something.” And I had six weeks to figure out what I could do. And I came back; I was probably here every month and a half throughout the war. I met with military people; there were a lot of private-public partnerships supporting military operations. I met with a lot of community leaders, met with hospitals, met with universities. But I also met with startup founders, because I had a job to do. And it was so depressing to meet people who were clearly emotionally affected by the war, because they were here. You can't not be affected by what happened on October 7th. But they were also saying, “Why can't we just stop fighting, so we can get back to making money in our businesses? Like, my employees are in the reserves. They're fighting. I need them to come back and finish our projects so we can be competitive. We're going to run out of money.”
It just broke my heart to hear Startup Nation founders and VCs thinking that we are here in Eretz Yisrael, in the Land of Israel, to be Startup Nation. We had to take a break from that secondary role to fulfill our primary role, which is to create a safe home for the Jewish people, to make the land holy, and to eventually bring back, not the steward, but the king, and bring Mashiach. And here were these really cynical people who I just felt were here for the wrong reasons.
Michael Eisenberg (1:11:33)
Is that easier for you to say at age 57, after a long career at Renaissance and investing your own money for a long time? You're in a different spot than these younger people who are doing this. My kids and my children-in-law were in Gaza for a lot, a lot, a lot of days.
David Magerman (1:11:50)
Yeah, my friends’ kids were there. I pray for them every day. My daughter made aliyah in February. She's starting Bar-Ilan…
Michael Eisenberg (1:11:57)
But you look at someone who's like 30 years old, you know, who spent 10 years in the military, as the case may be, and he's starting his company… I'm not suggesting–look, no one stood up and said, “Don't go to reserve duty.” We have a country to defend. Is it easier for you to say that to them, given where you're at?
David Magerman (1:12:13)
I didn't say it to them, and I don't blame them for thinking it. I just think it's a byproduct, a direct consequence, of us mis-messaging the value of Israel. I don't blame them. I totally get it. My heart just broke. I wasn't angry at them, I was heartbroken, because I feel like we are making a mistake by thinking that we are so successful in startups because we're really that great. It's God. And we've seen it in the war with Iran. We've seen it in Iron Dome and the missile defenses. We've seen it in the tunnels. And I have friends who've been almost live-blogging their service, in WhatsApp channels where they're sharing their experiences.
Michael Eisenberg (1:12:55)
Not a great idea.
David Magerman:
But they're sharing that God is with them, that these secular soldiers are seeing revealed miracles in their battles. And I just think that we are mis-messaging the strength of Israel and the power of Israel. When Moshe Rabbeinu–
Michael Eisenberg:
Moses.
David Magerman (1:13:12)
Moses. When Moses lifted his arm up, we won battles, and when his arm got weaker and went down, we lost. That wasn't because the muscles in his arm were causing us to win. It was the mechanism of God giving this power to the Jewish people. We're experiencing revealed miracles today. And I feel like the messaging we've been giving for the last 10 years about why Israel is successful is misleading people into thinking that we have to get back to starting billion-dollar companies, otherwise we're going to lose our edge in the world.
We have to stay united as a people. God is rewarding us for our unity as much as He was punishing us, you know, pre-October 7th, for our disunity. And I think we need to recognize that staying united, working together, loving each other, finding ways to relate to each other when we're very different, and figuring out what God wants us to do to protect the land, is what is going to make everything else we do successful.
Michael Eisenberg (1:14:10)
I wanted to ask you one last question. So you started a foundation called Tzemach David, which in translation means the growth of David, or–
David Magerman (1:14:17)
The seeds of David. Basically Mashiach.
Michael Eisenberg (1:14:20)
Yeah, I got it. Messiah. And you're focused on education. I know you've been pretty outspoken about what's going on in American universities over the last two years. And I'd just love to hear, as kind of our last question, your view of, not just American universities, but what you're actually trying to accomplish through this for the future of education for Jewish people here in Israel, and maybe its impact on the world?
David Magerman (1:14:48)
Right. So before October 7th, I had been working on education in Israel for a few years, because we found that Anglo-Americans who made aliyah, who came to Israel to live, had a lot of trouble integrating into the school systems, especially in the religious Zionist community, where kids were trying to learn Torah alongside secular studies. And the schools were not well equipped to handle students who didn't speak Hebrew well, and parents who didn't speak Hebrew well. So I hired someone to run my foundation to try to figure out: what do these schools need, what's causing the problem, and what can we do to help them fix it?
You don't fix it by creating American Jewish day schools in Israel; that's the last thing we need. We just need to find out what resources these schools need to understand the problem and to staff their teams to do a better job of dealing with Americans when they come over and helping them integrate. And so that was the mission, the sole mission, until October 7th. Then I got involved with the controversy at the University of Pennsylvania, my alma mater, and became known for these issues at the universities. On the first three trips I took back to Israel after October 7th, I went to universities. I went to Hebrew University. I went to Tel Aviv. I went to the Technion. And I said, “What do you need to do a better job of integrating Americans into your schools?”
And the first thing they said was, “We have this English-language program.” I said, “No, that's helping people come to Israel, study, live here for three or four years without speaking Hebrew, and then eventually go back to America, or maybe go to Ra'anana. And I have a lot of friends in Ra'anana, so there's nothing wrong with that. But what I want to know is, how are you going to help young adults come to Israel and graduate as Israelis? And then get married and have kids who are Israelis?”
And we came up with a plan for a mechina program, like a transition program. The first year, they study the introductory classes in their major in English, which are very easy for them to do, and they do an intensive ulpan to learn Hebrew. The more you need to learn, the more intense it is.
So for instance, my daughter, I hope she doesn't mind me saying it, did 13 years of Jewish day school. She was in honors Hebrew, and she tracked into the second-lowest level of ulpan. So the American Jewish day schools are failing at teaching their students Hebrew. She's now studying five days a week, three or four hours a day, intensively learning Hebrew while she's starting her college at Bar-Ilan University. And the hope is that at the end of the first year, she will have learned enough Hebrew that she can finish the last two years of her education in Hebrew, graduate fluent in Hebrew in her field of study, and then go on to get a Master's degree or practice her career. But the idea is, I don't want us to say, “Ooh, be afraid of Harvard, be afraid of NYU. It's scary in America. Mamdani, be afraid. Run away to Israel.” I want to look at Israel and say two things. “Israel is a gem. It's a diamond. Let's polish it. Let's take the beauty of Israel, the strength of Israel, and shine it up in a way that the world and Americans can see how beautiful it already is. And also, let's look at the problems. Let's find the things that Israel wants to fix about itself, that if we fixed them, more Americans would feel comfortable here. Let's fix those things also.”
And if you can do those two things… If you look at what's going on in America today as a Jew, as a human being, but let's just start with Jews. If you look at what's going on in America as a Jew, and you look at what Israel is with an honest lens, and you still don't choose Israel, then stay. Don't come. I want them to come, but they should choose Israel. If we're not doing a good enough job showing what Israel is, that's on us. But if we're showing them what it is and they're too American, too infected with the cancer of America, to come, then let them. My new mantra is: let them.
Michael Eisenberg (1:19:05)
So when are you coming?
David Magerman (1:19:09)
So this is a question people ask me all the time. I have to say ‘let them’ to my wife. She does not understand; she doesn't see things the way I do. But my goal is that all my grandchildren will be born in Israel. My daughter just made aliyah in February. She came for seminary, she applied, and by February she was finished, all through the war, all through the attacks, going in and out of bomb shelters. She made aliyah, and then she just started Bar-Ilan.
One of my sons is planning on finishing at–he was at the University of Pennsylvania. He transferred to YU. He wants to start the semicha program–
Michael Eisenberg:
Rabbinic program.
David Magerman:
Rabbinic program, and start his Master's program, and he wants to move to Israel and finish them here.
Michael Eisenberg (1:19:53)
Grandchildren are the biggest attraction to bring people to Israel.
David Magerman (1:19:57)
And my oldest son, who's very much embedded in New York right now: as soon as his siblings have children, he's going to make a beeline here.
Michael Eisenberg (1:20:07)
Turns out being an uncle also works.
David Magerman (1:20:26)
The truth is, I come to Israel for a week every month and a half. And I don't think I'm helping Israel more by being here; I'd be helping myself a lot more, since every four amot we walk here is a mitzvah. Like, I want to be in Israel, but I love my wife and I want to respect her. You know, when I got married, she was more religious than I was, so I have to respect that. I pray, literally three times a day, every day, that she changes her perspective on this, but until she does, I'm going to be living in America.
Michael Eisenberg (1:20:40)
Next time you come as a couple, have Shabbat at the Eisenbergs. We’ll see what we can do.
David Magerman (1:20:43)
Okay. Yeah, but that's why I don't live in Israel.
Michael Eisenberg (1:20:46)
David, thank you so much for doing this. This was super interesting. I mean, super interesting. And I hope people take away also just how much of a journey life is, and to kind of keep staying focused on doing the ethical and moral things as we go forward in life. Thanks, David. Appreciate it. If you like this podcast, please rate us five stars on Spotify and Apple podcasts, subscribe to the YouTube channel and do all the other great things that will keep our great content flowing right into your inbox.
David Magerman (1:21:03)
Thank you for having me.
Follow David on Linkedin
Subscribe to Invested
Learn more about Aleph
Subscribe to our YouTube channel
Follow Michael on Twitter
Follow Michael on LinkedIn
Follow Aleph on Twitter
Follow Aleph on LinkedIn
Follow Aleph on Instagram
Executive Producer: Erica Marom
Producer: Myron Shneider, Sofi Levak
Video and Editing: Ron Baranov, Nadav Elovic
Music and Creative Direction: Uri Ar
Content and Editorial: Kira Goldring
Design: Rony Karadi