Elad Raz

Elad Raz, Co-Founder and CEO at NextSilicon, on Outperforming Nvidia on Efficiency, the Architecture No One Thought Possible, and What Happens When the Memory Bottleneck Breaks

How can values create value? On this podcast, Michael Eisenberg talks with business leaders and venture capitalists to explore the values and purpose behind their businesses, the impact technology can have on humanity, and the humanity behind digitization.

Subscribe and listen anywhere:


Elad Raz

May 6, 2026

KEY TOPICS

00:00 - Intro - “10x Faster, Quarter the Power”

00:35 - Secret Supercomputer in U.S. Labs

03:27 - “Reinventing Compute” vs Nvidia

05:27 - Why the World Needs Infinite Compute

08:23 - Throwing Away CPUs and GPUs

11:25 - 30x–100x Chip Throughput Explained

12:47 - “What Jensen Missed”

13:00 - Why Dataflow Failed—Until Now

17:23 - Top 500 Supercomputer Reveal Tease

25:22 - Why Faster AI Gets More Expensive

30:33 - “AI Chips Are the Dumbest Idea”

33:49 - Why Transformers Won’t Last

34:02 - “Fastest Compute on Earth” Claim

38:36 - NextSilicon Is NOT a Feature

58:03 - Compute Replaces Food and Energy

On this episode of Invested, Michael Eisenberg sits down with Elad Raz, Founder and CEO of NextSilicon, for a deep dive into the future of compute—and why the AI revolution may be bottlenecked by the very chips powering it.

In this technical and provocative conversation, Michael and Elad unpack what it actually means to “reinvent compute,” why today’s dominant architectures from Nvidia, Google, and Amazon may be fundamentally limited, and how NextSilicon is taking a radically different approach—one that replaces traditional CPUs and GPUs with a dynamically reconfigurable chip architecture.

Drawing on his background in the IDF's elite Unit 8200 and decades of experience in high-performance computing, Elad explains why most AI chips are solving for the present, not the future, and why the real breakthroughs may come from entirely new paradigms like dataflow computing. He also shares bold claims about performance, including running workloads up to 10x faster at a fraction of the power, and discusses NextSilicon's role in building next-generation supercomputers for U.S. national labs.

They discuss:
- Why the world is entering the “age of compute”
- What Nvidia—and the rest of the industry—may be getting wrong
- The real bottleneck in AI: memory vs compute
- Why faster AI often means dramatically higher cost
- The difference between training, prefill, and inference architectures
- Why future AI models may look nothing like transformers
- The economics of chips, data centers, and trillion-parameter models
- How NextSilicon’s architecture could reshape both HPC and AI

Elad Raz is the Founder and CEO of NextSilicon, a deep-tech company redefining high-performance computing and AI infrastructure. A graduate of the IDF’s elite Unit 8200, Elad brings more than 20 years of experience in technology, entrepreneurship, and executive leadership. Before founding NextSilicon, he founded Integrity Project, which was acquired by Mellanox Technologies in 2014. Elad resides in Ramat Gan, Israel, with his wife and three sons.

If you want to understand where AI infrastructure is really heading—and why the next wave may not belong to today’s incumbents—this episode is essential viewing.

Please rate this episode 5 stars wherever you stream your podcasts!


[Michael Eisenberg — 0:00]

What did Jensen Huang at Nvidia miss, and what did you figure out that is so novel?

[Elad Raz — 0:05]

No one thought about it. We are now running workloads 10 times faster at half, or even a quarter, of the power consumption of a GPU. NextSilicon is not a feature. At the beginning, most people don't understand the uniqueness of what we have built. What does it mean to program a chip for the future?

[Michael Eisenberg — 0:24]

If you're engaged in AI, this is just about the hottest topic of our time right now.

[Elad Raz — 0:29]

Absolutely.

[Michael Eisenberg — 0:30]

So the big national lab supercomputer, one of them is built on NextSilicon now?

[Elad Raz — 0:35]

Yes.

[Michael Eisenberg — 0:35]

And how powerful is it?

[Elad Raz — 0:37]

We have our own tricks over there and we are going to reveal them soon.

[Michael Eisenberg — 0:00]

Ooh, want to reveal them now?

[Michael Eisenberg — 0:00]

Welcome back to another episode of Invested. I'm thrilled to have with me Elad Raz, founder and CEO of NextSilicon. Welcome, Elad.

[Elad Raz — 0:00]

Thank you. Thank you for having me.

[Michael Eisenberg — 0:59]

Okay, we've done a few of these shows which I would call at the more hard-science, geekier end of the spectrum. This will definitely be one of those, so we should warn our audience in advance: get ready to dig deep into the world of semiconductors, semiconductor architecture, data centers, and the general need for compute in the world. Is that fair?

[Elad Raz — 1:21]

Absolutely. And I'll tell you more. You know, when I first had investor meetings, I started with the deep tech. Whatever episode you've had about science, you know, getting down deep, to understand NextSilicon you probably need to go deeper. But it's not a warning. I think that we can convey the message about what NextSilicon is in a more generalized way that everyone will understand. So don't be scared.

[Michael Eisenberg — 1:48]

Okay, so just to start with something a little lighter: tell everybody what was engraved on the first chip that you produced, or taped out.

[Elad Raz — 1:58]

There is a tradition, you know, when you produce a silicon chip, and it's been like this for the past 40 years, that you can play around with the transistors and the etching on the chip, the way that the photoresist mask is laid out, and generate symbols. Many companies have done it. Pat Gelsinger, for example, wrote his initials on the 386 and the 486, and others put cartoons and trains. And when we started NextSilicon and we came to tape-out, tape-out is the term for taking your design and sending it to TSMC, you have to decide what you want to etch onto the chip. I thought about it really hard, and I went to the shul [synagogue] and asked the rabbi, "Hey, what do you think?" And he said, "Why wouldn't you choose the first word of the Bible? Bereshit." Bereshit, in the beginning, Genesis. In the holy scripture, in the Zohar, it says that this word encapsulates everything that was and everything that will be. Sounds appropriate. So we did it. And this is, I think, the smallest "Bereshit" in the entire world, etched at 7 nanometers.

[Michael Eisenberg — 3:11]

So given that it's everything that was and will be, that's quite a statement about compute. Let's just start in a reasonably simple place. Why don't you give the three-sentence pitch on what NextSilicon does? And then I want to talk about why the world needs so much more compute.

[Elad Raz — 3:27]

Sure, we are all about compute. Compute entails many, many different things. But in order to win that race, I mean, think about it for a second: you start a company that focuses on compute. Accelerating compute. Accelerating compute for science, for AI and machine learning, for all of those different aspects. You need to build the company, design the company, and start the company to be novel, original, and very different from the other multinationals that spend billions of dollars on their designs. You can't start a company that tries to copy and just make incremental improvements over the leading companies; these days that is obviously Nvidia, which spends tens of billions of dollars on every chip design. You need something else. So that's our goal: reinvent compute.

[Michael Eisenberg — 4:24]

Reinventing compute. But what is that? Tell us in three sentences.

[Elad Raz — 4:27]

Sure, in three sentences. Computers until now work in terms of a processor that reads an instruction and executes an instruction, right? So you do it in a serial way. And there is a way to increase throughput by replicating many of those processor cores. At NextSilicon, we work in a different way. We build a chip that has the arithmetic units on the chip itself in some kind of a grid. Very hard to program that. But we found a way to use a software algorithm that understands what matters now and literally reconfigures the silicon: it etches whatever workload you are running, scientific code, AI, machine learning, into the different transistors, and then streams the data in. That's the scary part. But we are going to...

[Michael Eisenberg — 5:21]

We're going to unpack it soon.

[Michael Eisenberg — 5:23]

Now, why does the world need much more compute today?

[Michael Eisenberg — 5:27]

But you should explain it.

[Elad Raz — 5:28]

This question has been asked all through the past eighty years, from the first computers ever invented. Why do we need that? Why do we need the big machine that cracked the Enigma? Suddenly the Enigma was cracked by a computer; you can't do it by hand. So there was a lot of money going into compute in the early days, from the Department of Defense and the Department of Energy. Even here in Israel: in 1953 the Israeli state was five years old, and the Weizmann Institute said, let's build the first computer. The name of the computer was WEIZAC. It cost 25% of the Weizmann Institute's budget that year. You know how many discussions there were with the first president, Chaim Weizmann, about why we needed that machine? Who is going to use it? Now we want this genius, Pekeris, to work on that machine, but who else is going to use it? And guess what? Two years later, when the machine was powered on, everyone queued in line to use it. Even on weekends, on Shabbat, they had to bring a goy shel Shabbat [a gentile who may work on the Sabbath], someone to operate the machine. So throughout history there was this question: why do we need more compute? Even when I started NextSilicon, I spoke to one of my relatives and he asked me, "Elad, my computer works just fine. Why do you need a stronger computer?" But the answer, especially now in AI, is kind of obvious. You send an agentic task, you let the GPUs run, you hear the fans spinning, and the tokens crawl across your monitor. You need that faster. The more compute you can generate, the better the results you're going to get from your queries, whether it's AI or even science: fidelity in oil and gas simulations, and more.

[Michael Eisenberg — 7:21]

Okay, so if we look at the world. This was eight years ago, when you decided to do this. Today it's obvious; eight years ago was around the founding moment of OpenAI, but nobody knew it existed. So at the time, you made a decision, and this is all above my pay grade in terms of technological complexity, to focus on HPC, high-performance computing. Try to tell our audience what was unique about the chip and your approach, and what is unique about your desire to go after high-performance computing. And when you're done, we'll distinguish that from what's going on in AI today.

[Elad Raz — 8:01]

So let's start with the underlying core technology. We have produced two generations of chips that accelerate computation, period. The way we do that is using a software algorithm that changes the wiring on the chip, rather than having a general-purpose chip that gets instructions and executes them.

[Michael Eisenberg — 8:23]

Okay, stop for a second. Explain that to everybody. The difference between software that changes the quote unquote wiring on the chip versus a general purpose computing architecture, whether it's a GPU or a CPU.

[Elad Raz — 8:35]

So let's start with the basics. Okay. And I'm going to use a name that I never use when I talk to investors, which is John von Neumann. He's the great scientist who explained the fundamentals of a processor core. So you have a processor core: it has input, some memory state, and an output, and it gets instructions. Those are the processors of today. You get the instructions of what to do, that's the user's program, and the device itself just runs the instructions one after the other. Processors got more complicated, more vectorized, meaning you can do more operations at once, multicore, everything. You know all of that and we won't get into the details, so don't worry. Those are processor cores, traditional architectures. With NextSilicon, we just threw that away. We said, hey, let's look at the processor core. There is so much logic that decodes the instructions and tries to understand, hey, what do those instructions mean? How can I fetch the memory in order to execute those instructions? And by the time you get to the actual unit that computes, you know, the schoolbook multiplication or addition, only about 2% of the transistors of the chip, in a CPU, are allocated for compute. A GPU is way better. We said, let's just stick with the transistors that do the computation. That's it. Now the problem is, okay, if you don't have instructions, how can you tell those different units what to do? So the answer, and this is the IP that we have developed, is saying: okay, let's connect the different ALUs in some kind of a mesh, and we're just going to reconfigure the mesh to move the wires from A to B. Still, how can I program this thing? And there are two aspects. One of them is, okay, let's go to the sophisticated user and ask him to draw on a circuit board exactly what to do. By the way, this is literally what Los Alamos did when they developed the A-bomb. They put together a bunch of human calculators, and they were women, each of whom took two numbers, did an operation, and passed the result on to the next, and together you get the entire computation. We decided to say: let's understand the computation in an autonomous way. The user wrote the program in C code; let's run it on a CPU, understand the different computational kernels, and automatically, autonomously, wire them onto the chip, move the memory, and accelerate those computations. That's what we have done. So instead of working at 2%, we are working at close to 60%.
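The von Neumann versus dataflow contrast Elad draws can be sketched in a few lines. This is a toy illustration, not NextSilicon's actual software: the point is that the dataflow version builds the "wiring" once (analogous to reconfiguring the mesh) and then streams data through it, instead of fetching and decoding an instruction for every step.

```python
# Toy contrast: the same computation as a sequential instruction stream
# (von Neumann style) vs. a wired graph that data streams through (dataflow).

def von_neumann(xs):
    """Fetch/decode/execute one instruction at a time, per element."""
    out = []
    for x in xs:
        t = x * 2          # "multiply" instruction
        t = t + 3          # "add" instruction
        out.append(t)
    return out

def make_dataflow_graph():
    """Build the graph once: one ALU wired as a multiplier feeding
    another ALU wired as an adder. Then just stream data through."""
    mul = lambda x: x * 2
    add = lambda x: x + 3
    return lambda stream: (add(mul(x)) for x in stream)

graph = make_dataflow_graph()
print(list(graph([1, 2, 3])))   # [5, 7, 9], same as von_neumann([1, 2, 3])
```

Either way the answers match; the difference is where the "what to do" lives: in an instruction stream decoded per element, or in the wiring of the graph itself.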

[Michael Eisenberg — 11:25]

So 30 times the throughput on the chip.

[Elad Raz — 11:27]

Exactly.

[Michael Eisenberg — 11:29]

30 times. Three zero?

[Elad Raz — 11:32]

Even more: we have some workloads where you get to 100x. The problem is Amdahl's Law, meaning you have overhead moving the data in and out. You need to think about it like pipes. Okay, you can put in a wide pipe of water, but if you have thin pipes feeding it, you can't drive the machine as much as you want. So Amdahl's Law says that you're probably going to hit the bottleneck not in the computation, but in the I/O and the data movement.
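Amdahl's Law, which he invokes here, puts a hard ceiling on the end-to-end speedup when only part of the runtime is accelerated. A minimal sketch (the 90%/30x split below is an illustrative assumption, chosen to echo the 30x compute figure above):

```python
def amdahl_speedup(accelerated_fraction, accel_factor):
    """Amdahl's Law: overall speedup when `accelerated_fraction` of the
    runtime is sped up by `accel_factor`. The remaining fraction (I/O,
    data movement -- the "thin pipes") is unchanged and caps the gain."""
    return 1.0 / ((1.0 - accelerated_fraction)
                  + accelerated_fraction / accel_factor)

# 30x faster compute, but 10% of the time spent moving data:
print(round(amdahl_speedup(0.9, 30.0), 1))   # 7.7 -- not 30x end to end
```

This is why the conversation keeps returning to memory and I/O: past a point, making the compute faster barely moves the end-to-end number.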

[Elad Raz — 12:01]

But yes, we are now running workloads 10 times faster at half, or even a quarter, of the power consumption of a GPU. End to end.

[Michael Eisenberg — 12:10]

Right, compute per watt is up 40x?

[Elad Raz — 12:18]

Yes, 40x, if it's 10x at a quarter of the power consumption.
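The 40x compute-per-watt figure is straight arithmetic from the two claims quoted above: 10x the throughput at a quarter of the power.

```python
# Performance per watt = throughput / power, so the two gains multiply:
speedup = 10.0        # claimed: workloads run 10x faster
power_ratio = 0.25    # claimed: at a quarter of the GPU's power draw
perf_per_watt_gain = speedup / power_ratio
print(perf_per_watt_gain)   # 40.0
```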

[Michael Eisenberg — 12:22]

Gavin Baker, who by the way is one of our most popular episodes ever. And for anyone listening now, if you haven't listened to our Gavin Baker episode, you need to listen to the Gavin Baker episode. Gavin Baker is an investor in NextSilicon. And one of the things he said on our show was that what you're doing is incredibly novel. It's very, very different. Explain to our audience what is so novel about it. What did Jensen Huang at Nvidia miss, and what did you figure out that is so novel?

[Elad Raz — 12:47]

No one thought about it.

[Michael Eisenberg — 12:48]

What does that mean?

[Elad Raz — 12:49]

No one in academia, no one in industry. I mean, the concept of dataflow itself existed in literature. People tried to do that, but everyone failed.

[Michael Eisenberg — 12:59]

What is dataflow?

[Elad Raz — 13:00]

Dataflow is the way that you take the arithmetic units and connect them together, with the data flowing through the device. That's the origin of the name dataflow. And every other architecture in the world today is based on... For example, a GPU. There is something called an SM. That's a processor core that runs instructions, and tensor cores that do the matrix operations. Okay? We have just dataflow. Just dataflow. And some companies tried 20 years ago to make this technology work. All of them failed.

[Michael Eisenberg — 13:35]

Who was that?

[Elad Raz — 13:36]

There were companies. I mean, even HPE had the concept of The Machine, where there was memory in the center and then dataflow trying to feed it. Intel tried to do it with their spatial processor. I mean, many companies tried, and the failure was because of the lack of programmability, the lack of software; it was very hard to program those workloads. Now, let me take you back to the original decision. I mean, why HPC? What is HPC? Why did you take the company into HPC? If I give you a 10-minute explanation about the technology, I hope you have a skip button and can just jump to the summary.

[Michael Eisenberg — 14:18]

Okay, no, people are interested. You know, if you're investing in the stock market or you're engaged in AI, this is just about the hottest topic of our time right now.

[Elad Raz]

Absolutely, yeah.

[Michael Eisenberg]

Go ahead.

[Elad Raz — 14:27]

Nothing in the technology is oriented toward a specific vertical. Not oil and gas and how you can find new pockets of gas and oil underneath, like the geoscience applications that look beneath planet Earth. Nothing is tied to space and predicting how much dark matter there is and how galaxies were formed. Nothing about the fluid dynamics that explains how a SpaceX rocket is launched. I mean, nothing is tailored to a specific market, or even to AI and machine learning. Because at the end of the day, AI and HPC use linear algebra: you know, matrix multiplication, matrix-vector operations, point-wise kernels. I mean, the compute is the same. Nothing in the technology is blocking you from running any type of science or neural network. But eight years ago the world was different. I mean, people forget how fast we have moved. Today's architecture is transformers. That's the name behind the big LLMs, whether it's Claude Opus or the new model, or Grok, or OpenAI with Codex, the agentic one, or GPT 5.4. So it's all about science. Eight years ago we were in a different space, where the first AI accelerators were appearing and the models were convolutional neural networks: taking an image and classifying it, whether it's a dog or a Chihuahua, a cat or a dog. The reason you start a company is because you geek out; you want to build the technology. This is why you start a company. But at the end of the day, every company needs to generate cash. So you could have gone after machine learning eight years ago, but the market was undefined, and Google had their own TPU back then. So it wasn't clear how you could generate money. Even now. But back then it was way more serious. And then we said, okay, we have a novel compute architecture. Let's go back to basics. Every big compute company started by selling to the HPC market: Intel, AMD, Texas Instruments, even Nvidia. Yeah, Nvidia started with graphics, but you know, the big computation started with supercomputers. Build the supercomputers. And we, Baruch Hashem [thank God], built a big supercomputer with the Department of Energy.

[Michael Eisenberg]

In the United States?

[Elad Raz]

Yes, in the United States. It's a joint venture, primarily built at Sandia National Laboratories, but it's a collaboration of Lawrence Livermore, Los Alamos, and Sandia.

[Michael Eisenberg — 17:18]

Yeah. So the big national lab supercomputer or one of them is built on NextSilicon now.

[Elad Raz — 17:23]

Yes.

[Michael Eisenberg — 17:23]

Okay. Yeah. And how powerful is it?

[Elad Raz — 17:27]

So, I don't know when this episode is going to air, but you know, big news is coming soon. It's big. It's in the Top 500.

[Michael Eisenberg — 17:37]

Top 500 supercomputers in the world?

[Elad Raz — 17:39]

Yeah. It will be published, but I'm not going to reveal when, or what position.

[Michael Eisenberg — 17:44]

And in terms of cost of compute, when you compare it to the other 500 in the world?

[Elad Raz — 17:48]

So you said 40x faster.

[Michael Eisenberg — 17:50]

40x faster per watt.

[Elad Raz — 17:52]

Yeah. And dollar.

[Michael Eisenberg — 17:55]

And dollar. Well, that's a step-function change. But we'll come back to this in a second. So you decided to go into HPC because everybody starts in HPC. But Jensen and Nvidia have gotten out of the HPC business, because the AI business, or let's call it the LLM business, or transformers, is much more lucrative right now. Why don't you do that?

[Elad Raz — 18:17]

We are.

[Michael Eisenberg — 18:18]

Oh, you are doing that.

[Elad Raz — 18:19]

Yeah.

[Michael Eisenberg — 18:19]

Is that easy? Just like you say. Okay, I'll turn off HPC and turn on transformers.

[Elad Raz — 18:25]

Yeah. Way easier than we thought.

[Michael Eisenberg — 18:27]

Why is that? Let's back up one second. Okay, so let's make some order for our listeners here. There are GPUs, which is the game Nvidia is playing. There are TPUs, which is the game Google, and to some extent Amazon, is playing.

[Elad Raz — 18:43]

Let's take it at a higher level and understand what the user wants to do, and based on that we'll name the different accelerators, because you have different workloads. Some users want edge computation: here's an image, I want to classify it, or I want to do voice-to-text, okay, kind of like Whisper, or diffusion models. Those transformer models typically have a hundred million to a billion parameters. Tiny models compared to the big LLMs of a trillion parameters. And then the question is: should I buy a $10 million rack to accelerate those tiny models, or can I go to the hundreds of startups doing small, cheap, low-power, low-cost chips to accelerate them? So this is one class of computing. And then you have, I think, the biggest range: models between 200 billion parameters and 2 trillion parameters. Because think about it: most users who have adopted AI now, the early adopters of AI technologies, are software programmers, with Claude Code, and Codex, and even Gemini has a good agentic solution for that. And they send a request, they want it to read the entire code base, which gets to millions of tokens. They want their GPUs to crunch those models and spit out lots of tokens. That's the main and prime use that we have today. And even the agentic task of "read all my contracts and try to summarize them": you want a trillion-parameter thinking model. You want a big model that thinks, that generates a lot of tokens (a token is the little word) and reads a lot of tokens. Those are the classical models. In between, you have most of the hyperscalers, and Google, chasing after them: the TPU, the Nvidia Rubin or Rubin Ultra, the new chips, the Amazon Inferentia and Trainium. They're all around that section. And it used to be, hey, there's a classification between training and inference. Today the nuance is more: is it a prefill-and-training chip, or a decode-and-inference chip? And with the Groq acquisition, there is another model, like Cerebras.

[Michael Eisenberg]

One second, because I think that's important. I think when Gavin was here, it was right after, or right before, Nvidia bought Groq, right? Explain why Nvidia bought Groq and why that's important, number one. And number two, explain Cerebras. Cerebras has raised a lot of money recently. I mean, it's an incredible story, Cerebras, right? It almost went public, but there was customer concentration, maybe some other issues, and they pulled back from the IPO. It's now a deca-billion-dollar company, or 20 or 30 billion dollars, I think, and raising lots of money in the private market. So let's just try to take our listeners through what the differences are.

[Elad Raz — 21:53]

Sure. So let's classify them into these three categories. Training and inference, which everyone spoke about a few years ago. People understand that there are a few companies that train models, and then there's fine-tuning a model, meaning taking an existing model, getting your custom data, and just tweaking it. It's not something that costs a lot of money. You can do it anywhere.

[Michael Eisenberg — 22:14]

Who does that?

[Elad Raz]

The fine-tuning?

[Michael Eisenberg — 22:17]

Yeah.

[Elad Raz — 22:17]

There are many companies that take open source model.

[Michael Eisenberg]

Right.

[Elad Raz — 22:20]

Whether it's the Chinese models, the Alibaba model, or the Llama model, and others. They say, okay, here is a base model. A base model is not one that you can just ask a question.

[Michael Eisenberg — 22:34]

What do I do the fine-tuning on?

[Elad Raz]

Their proprietary data. The base model is open source.

[Michael Eisenberg — 22:38]

So that's the data side. What kind of compute do I do it on?

[Elad Raz]

So it depends on the volume, but something like a single rack.

[Michael Eisenberg]

Basic compute.

[Elad Raz — 22:42]

Basic compute. Training a model from scratch is super expensive and super big, and there are few companies doing that. And then inference is a phase where... I mean, GPUs are not one-size-fits-all; even within GPUs you have differences, mostly around the memory: memory cost, capacity, and bandwidth. When a user gives you a query, like giving a paper to the LLM, to the transformer, there are two phases. The first phase is called prefill: take everything the user gave, and, I'm going to simplify the hell out of it, do a big matrix multiplication, and here is the context. That's the prefill. It digests the data into some kind of prompt context. And decode, this is the phase everyone loves to talk about now, especially to understand what Groq and Cerebras are, is the one that, once you have the context, generates a lot of small tokens. The prefill phase is very similar to training, because you are doing a big matrix-matrix multiplication; you don't need a lot of memory bandwidth, so you don't need expensive memory. That's the digest of it. That's why you have chips dedicated to training or prefill. In decode, the workload is characterized by a big matrix times a vector. The problem with a big matrix and a vector is that you are memory-bound. You just stream the data from the matrix, you do the FMA [fused multiply-add], you do some point-wise kernels and a reduction. That's it.
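The prefill-versus-decode split comes down to arithmetic intensity: a matrix-matrix multiply (prefill, training) reuses each weight across many tokens, while a matrix-vector multiply (decode) does only a couple of operations per weight fetched from memory. A rough sketch with illustrative numbers (the 8192 model width and 4096-token batch are assumptions, not any vendor's specs):

```python
# Arithmetic intensity (FLOPs per byte of weights read) for one
# d_model x d_model weight matrix, stored as fp16 (2 bytes/element).

def flops_per_byte(tokens_per_pass, d_model):
    flops = 2 * d_model * d_model * tokens_per_pass  # multiply + add
    weight_bytes = 2 * d_model * d_model             # weights read once
    return flops / weight_bytes

prefill = flops_per_byte(tokens_per_pass=4096, d_model=8192)  # big matmul
decode  = flops_per_byte(tokens_per_pass=1,    d_model=8192)  # one token
print(prefill, decode)   # 4096.0 1.0 -- decode does ~1 FLOP per byte
```

At roughly one FLOP per byte, decode is limited by how fast the memory can deliver weights, not by how much compute is on the chip, which is exactly why the conversation turns to memory next.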

[Michael Eisenberg — 24:21]

Okay, I'll explain what you just said to everybody.

[Elad Raz — 24:24]

Memory-bound means... I mean, everyone understands that tokens per second is how fast, you know, the screen fills up. Okay, memory: there are different types of memory. You have SRAM, HBM. SRAM is the one that Groq and Cerebras are all about. The question is how fast I can get the data from memory, the weights and the context, the KV cache from the prefill phase, so I can do the computation. You're not compute-bound, you are memory-bound. If you're using cheap DDR, you're at around 1 terabyte per second of data movement. If you're using HBM, you're at 30 terabytes per second, like 30x faster, so you can do more compute. If you are inside the SRAM, you can get to 150 terabytes per second, way faster. Meaning you can stream out decode tokens faster.
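Using the bandwidth figures quoted here (roughly 1, 30, and 150 terabytes per second for DDR-class, HBM-class, and SRAM-class systems), a memory-bound decode rate can be estimated as bandwidth divided by the bytes touched per token. The ~1 TB model size below is an illustrative assumption (about a trillion parameters at one byte each), not a specific product:

```python
# Memory-bound decode: tokens/sec ~= bandwidth / bytes read per token.
# Bandwidth numbers are the rough ones quoted in the conversation.

WEIGHT_BYTES = 1e12  # ~1 trillion parameters at 1 byte each (assumed)

for name, tb_per_s in [("DDR-class", 1), ("HBM-class", 30), ("SRAM-class", 150)]:
    tokens_per_s = (tb_per_s * 1e12) / WEIGHT_BYTES
    print(f"{name}: ~{tokens_per_s:.0f} tokens/sec per replica")
```

The estimate ignores batching, KV-cache reads, and interconnect, but it shows why the bandwidth tier, not the FLOPs, sets the decode speed for big models.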

[Michael Eisenberg — 25:11]

And that's why Cerebras is using those giant wafers also.

[Elad Raz — 25:14]

Yes, exactly. But the dumbest thing is, you know, no one calculates the cost.

[Michael Eisenberg — 25:20]

What does that mean?

[Elad Raz — 25:22]

So different memories have different costs. Okay. No one invented a new memory. Cerebras didn't invent SRAM, and neither did Groq. I mean, it's just existing memory that everyone uses; we use SRAM too. The question is how much memory you are using. HBM is expensive. HBM requires advanced packaging, meaning that in the supply chain, when you produce the chip, when you produce the Maverick chip, our chip, your bottleneck...

[Michael Eisenberg — 25:49]

Your chip is called Maverick.

[Elad Raz — 25:50]

Exactly.

[Michael Eisenberg — 25:50]

Not Genesis, it's called Maverick.

[Elad Raz — 25:52]

Maverick. You have different types of memory, and every memory has a different cost attached to it. HBM is very expensive, but has medium capacity. DDR has low memory bandwidth, 1 terabyte per second, and is relatively cheap, but you can get 2 terabytes of capacity. SRAM is a hundred times less dense than that, meaning it costs a hundred times more, because you need more chips to build the same capacity. Those are the three categories. Now, the question, as you said, is how do you measure your performance? Is it performance per watt, where you forget about the CapEx, forget about the dollar amount you put inside the chip? Just performance, or performance per watt per dollar? Okay, CapEx is not free. You need to invest billions of dollars to build those data centers. That's why you see data centers that are still running Hopper. It's not that...

[Michael Eisenberg]

Hopper is a previous generation of chip.

[Elad Raz]

Yeah, exactly. I mean, when Blackwell launched, Jensen said, "Throw away the Hopper, buy Blackwell." But CapEx is not zero cost.

[Michael Eisenberg — 27:05]

Well, it also turns out that, because of that, these chips have a longer life than anyone thought before. You know, they'll depreciate over, I don't know, seven or eight years, and not three or four years.

[Elad Raz — 27:15]

True. Yeah, but that's also because in the decode phase it's all about the memory, and memory technology didn't progress as

[Michael Eisenberg — 27:22]

fast as the chip.

[Elad Raz — 27:23]

So you can put in way, way, way more compute. Jensen's Law is like 40x, you know, more compute, right? But can you utilize the memory as well? Because that's the bottleneck. And then you need to build a big rack and connect the chips and do collectives to bring the memory into one place. So the question is how you generate tokens per second. Tokens per second is the measurement that everyone loves. This is what started the LPU, and Jonathan Ross was the founder.

[Michael Eisenberg]

What is LPU?

[Elad Raz]

The LPU is the name of Groq's architecture, meaning there is SRAM, you put the weights of the transformer there, you just stream the data, and you can generate, you know, lots of tokens when you are in SRAM. The problem is, for a trillion-parameter model you need 2,200 of those chips. Super expensive. You need to normalize that.

[Michael Eisenberg — 28:17]

What does it mean to normalize that?

[Elad Raz — 28:19]

To normalize that: at the last GTC. I mean, we're doing this recording after Passover, in April. Eight months ago there was the GTC where Jensen spoke about the Groq acquisition, and he had a nice chart saying: we, Nvidia Rubin (that's the name of the Nvidia GPU), are in the mid-range, up to a 2-trillion-parameter model with a half-million context window. But if you want faster tokens per second, here is a joint effort, a rack of Rubin and a rack of Groq LPUs combined, that gets you 15 times more tokens per second. Great. How much does it cost? Fifty times more. Instead of $6 per million tokens, it's going to cost you $45 per million tokens, nine times more expensive. To normalize means asking: are users going to pay that price to run those models faster, more tokens per second at many times the cost per token? That's the big question around Groq and Cerebras. Will the price make sense for users, paying this extra for faster models? That's the big question around SRAM-based architectures.
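"Normalizing" here just means dividing the throughput gain by the cost increase. A small sketch using the figures quoted in the conversation (note that $6 to $45 works out to 7.5x per token, close to the "nine times" stated; these are illustrative numbers, not benchmarks):

```python
# Normalizing a throughput claim against its cost, using the quoted figures.
baseline = {"tok_per_sec": 1.0,  "usd_per_mtok": 6.0}   # plain GPU rack (relative)
combined = {"tok_per_sec": 15.0, "usd_per_mtok": 45.0}  # GPU rack + SRAM LPU rack

speedup = combined["tok_per_sec"] / baseline["tok_per_sec"]
cost_ratio = combined["usd_per_mtok"] / baseline["usd_per_mtok"]

# The "will users pay?" question is whether the speedup justifies the ratio.
print(f"{speedup:.0f}x faster at {cost_ratio:.1f}x the cost per token")
```

Whether 15x more tokens per second is worth several times more per token is exactly the open question he raises.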

[Michael Eisenberg — 29:43]

Okay, so said simply, and then we'll come back to NextSilicon. Everything right now is measured in AI land, not HPC land, in cost per token. The new architectures that combine Nvidia GPUs with either Groq or Cerebras to increase the memory footprint, we'll call it memory capacity, have increased the speed but also significantly increased the cost of producing a token. What are the applications people are willing to pay that for?

[Elad Raz]

Yeah.

[Michael Eisenberg]

Okay. Enter NextSilicon now.

[Elad Raz — 30:22]

Enter NextSilicon. Yeah, so at NextSilicon we can obviously play around the entire range. We have decided to focus.

[Michael Eisenberg — 30:33]

What does that mean you can play around the entire range? What does that even mean? Nobody can do everything.

[Elad Raz — 30:35]

Yeah, absolutely. You need different products for different markets. You need SRAM-based, you need HBM-based, you need LPDDR for prefill or training. Those are different types of products. The core technology itself is how to run compute faster. And then the question you need to ask me is: well, Elad, who cares? You just told the entire audience that AI is a memory-bound problem. So take the memory and maximize it; that's the result you can get. How can you run it faster?

[Michael Eisenberg]

Right. There goes that Amdahl problem again.

[Elad Raz — 31:18]

Exactly. Yeah, but that is short memory: let's design a chip that looks at past performance, not the future. What does it mean to design a chip for the future? So first, understand, for the audience: you are an investor in NextSilicon, you know how painful it is to build a semiconductor company. Just for perspective, it's $150 million for every ASIC design. Meaning the company burns $150 million and only then gets a first sample from TSMC, from the foundry. There can be mistakes, and mistakes are expensive, and then you need to go back to the drawing board and do another revision. It's super capital-expensive. It's not easy. I'm never going to build a semiconductor company again.

[Michael Eisenberg — 32:10]

Because Jensen's still doing what he's doing after 30 years. I expect you'll be in a rocking chair before you get another chance.

[Elad Raz — 32:17]

Oh, probably. Okay. So that's expensive. So when you design a chip, you need to design it for three years into the future rather than three years in the past. Today's architectures are transformers. Transformers are bottlenecked by HBM memory or SRAM memory. But algorithms change quickly. Remember, in the beginning we said that when the first AI models came, it was convolutional neural networks for image classification, until the ChatGPT moment, when people said this thing works and we can, you know, unlock the imagination. And then a lot of people spoke about AGI and other things. At the end of the day, whether it's Yann LeCun's AGI or not doesn't matter. Algorithms change. With algorithmic change you can get way, way better performance than from the actual chip. At NextSilicon we don't care what workload is running, whether it's AI decode or HPC code. Why is this thing so powerful? Because future models in '28, '29 are probably not going to be the transformers we know today. Even now there are papers from Google and even Nvidia that say: since we are memory-capped, let's compress the memory, get it to the computational core, decompress that KV cache, and then we can do more work.
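The compress-then-decompress idea pays off because decode is bandwidth-bound: time per token is roughly bytes moved divided by memory bandwidth, so moving fewer bytes is a direct win. A minimal, idealized sketch, with assumed (not quoted) bandwidth and traffic figures:

```python
# Why compressing cached data speeds up decode when memory bandwidth is the
# bottleneck. Bandwidth and bytes-per-token here are illustrative assumptions,
# and the model idealizes everything moved as compressible.

def decode_time_per_token(bytes_moved: float, bandwidth_bytes_per_s: float) -> float:
    """Time per token when decode is purely memory-bandwidth-bound."""
    return bytes_moved / bandwidth_bytes_per_s

bandwidth = 8e12   # assume ~8 TB/s of HBM bandwidth
traffic = 100e9    # assume ~100 GB of weights + KV cache read per token
compression = 8    # assumed compression factor on the data moved

t_plain = decode_time_per_token(traffic, bandwidth)
t_compressed = decode_time_per_token(traffic / compression, bandwidth)
print(f"speedup = {t_plain / t_compressed:.1f}x")
```

In practice decompression itself costs compute, so the realized gain sits below the raw compression factor.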

[Michael Eisenberg — 33:49]

Okay. Said simply, what you're betting on is the following: because I am the fastest, bestest, cheapest compute processor on planet Earth... Are you, by the way?

[Elad Raz — 34:02]

Yes.

[Michael Eisenberg — 34:04]

Okay. And everybody else is memory-bound. So why don't we, you know, tighten the pipes a little bit, or compress things so we can jam more memory through the system, push it through NextSilicon, do some magic on it, and basically take advantage of the increased compute and not be memory-bound anymore.

[Elad Raz]

That's one example, and the compression isn't even ours. Users are doing that research now and showing 8x better performance just from memory compression.

[Michael Eisenberg]

Even without that?

[Elad Raz — 34:22]

No, no, even without NextSilicon, on a GPU, because it's easier to program. I mean, building an AI accelerator just to run transformers is the dumbest idea ever, because algorithms change. I'm betting that three or four years from now you will see more complex algorithms that accelerate the computation, or demand 1,000 times less computation and memory bandwidth, so that we can run those models 1,000 times faster. Now, this 1,000x number is important. Why? Because your brain operates at 20 watts. Those big racks of GPUs are at 22 kilowatts, and the same operation costs 1,000 times more when you run it on a GPU. That's not going to be solved by getting to 0.01 nanometers on the transistor.
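The 1,000x energy claim is just the ratio of the two power figures he quotes, 20 watts against 22 kilowatts:

```python
# Ratio of rack power to brain power, using the figures from the conversation.
brain_watts = 20
rack_watts = 22_000  # 22 kilowatts

ratio = rack_watts / brain_watts
print(ratio)  # 1100.0, roughly the "1,000 times" quoted
```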

[Michael Eisenberg — 35:25]

Your brain operates at 20 watts. Mine probably operates at less than that.

[Elad Raz — 35:28]

I'm not sure about that.

[Michael Eisenberg — 35:29]

Let's just be very clear about the bet. The bet is that the world is going to need dramatically faster compute, and that the bottleneck problem, let's call it, which is memory...

[Elad Raz]

For now.

[Michael Eisenberg]

For now, is going to get solved, or solved enough to be able to take advantage of the compute. But right now you don't provide that much more value because of the memory problem.

[Elad Raz — 35:54]

Yeah, but we have our own tricks over there and we are going to reveal them soon.

[Michael Eisenberg — 36:02]

Ooh, want to reveal them now?

[Elad Raz]

No.

[Michael Eisenberg — 36:05]

When's soon?

[Elad Raz]

Your partner is going to shoot me.

[Michael Eisenberg — 36:07]

Okay, now, when Nvidia or Amazon or Google take a look at you, right? These are companies producing GPUs and TPUs that a priori are slower, less accelerated than your computer. What's their answer? What do they say? By the way, I should point out, I pointed this out in a tweet recently: I asked Grok, by the way, Grok with a K, not with a Q, which is the X bot, SuperGrok. The question I asked was: what is the original technology that enabled Amazon's Trainium and Nvidia's scale-up of their compute? And the answer, by the way, is Israel in both cases, because I think it was Annapurna, I can't remember, Savannah or Annapurna, sold to Amazon, that became the basis for Trainium, and Mellanox of course, which is the interconnect that has enabled Nvidia. What?

[Elad Raz — 37:12]

That's NVLink. I mean, today you talk about systems.

[Michael Eisenberg — 37:17]

Yeah.

[Elad Raz — 37:17]

Because you can take a model and shard it across many GPUs. The bottleneck is how fast you can connect those different GPUs.

[Michael Eisenberg — 37:24]

Yeah, that comes from Mellanox, of course, built by Eyal Waldman, and he's been on this podcast before. So if I'm Google or Amazon or Nvidia and I'm looking at NextSilicon, I go, nah. Why? What would they say to claim this is not real or not important?

[Elad Raz — 37:42]

Right now, the demos that we have made were around HPC.

[Michael Eisenberg]

Not the demos, you have actual customers.

[Elad Raz — 37:49]

The customers. I mean...

[Michael Eisenberg]

The use cases are around HPC.

[Elad Raz]

Exactly.

[Michael Eisenberg]

HPC, in case I didn't say it earlier, is high performance computing. But go ahead.

[Elad Raz — 37:56]

Right now we are taking scientific code in HPC. That's what we are running, and winning. The core technology is so much different. It's not a feature. NextSilicon is not a feature. It's not, okay, let's take the GPUs and connect NextSilicon to them. It's a different architecture; it's a replacement. This is where we are. And as with every novel technology, GPUs included, at the beginning most people don't understand the uniqueness of what we have built. The way for us to compete with that is just to execute.

[Michael Eisenberg — 38:36]

If you had to use historical analogies, you're more CPU than GPU, is that fair to say?

[Elad Raz — 38:42]

No, it's not a CPU or a GPU. It's not an FPGA. It's something different. But let me give an example. When Intel first heard about GPUs, they said, who cares about the GPU? Okay, we have a CPU. This is the way people see us. They say, okay, they have an interesting technology. And obviously we are not a market leader in any category, but this is where we are now.

[Michael Eisenberg — 39:09]

And how big is your penetration into the HPC world?

[Elad Raz — 39:13]

We are in every Department of Energy lab, every three-letter agency. I mean, we are there.

[Michael Eisenberg — 39:21]

You're there. And now that Jensen left the HPC market, has that opened up more opportunity for you?

[Elad Raz — 39:27]

Absolutely.

[Michael Eisenberg — 39:28]

Why did he leave it?

[Elad Raz]

Good question. I think that you have limited resources inside your silicon; there's a fixed number of transistors. It used to be 100 billion transistors; now it's 200 billion transistors on the Blackwell chip. Should I allocate more resources for HPC, or should I allocate more resources for AI? In HPC, computers are built for around 10 to 100 million dollars, even the big leadership-class machines. AI gigawatt factories start at a billion dollars, tens of billions of dollars per gigawatt.

[Michael Eisenberg — 40:13]

So you're still playing in the, in the little leagues there.

[Elad Raz — 40:16]

Yes. Okay, I'm in the little leagues.

[Michael Eisenberg — 40:17]

Like one of the big advantages that Nvidia has is CUDA, right? Which is the software layer that's created a lot of lock in. How are you going to break into that?

[Elad Raz — 40:30]

You just run CUDA.

[Michael Eisenberg — 40:31]

You're going to run CUDA?

[Elad Raz — 40:32]

We run every programming language the user wants us to run. We can run it.

[Michael Eisenberg]

Why?

[Elad Raz]

The way NextSilicon operates is not through a programming language. We don't really care how the user wrote the code. We get the code, we understand the different computations, and at runtime, once we understand how it works, we reconfigure the chip, we place the different transistor...

[Michael Eisenberg]

You actually reconfigure the chip?

[Elad Raz]

Exactly. And whether the input is C code or C++ for HPC, or CUDA or Triton to write a new kernel, we just execute it.

[Michael Eisenberg — 41:13]

One of the things that Gavin said when he was on the podcast is that chip companies take three generations to get their act together and find the chip that works in the market. You're Gen 2 now, right?

[Elad Raz — 41:26]

Correct.

[Michael Eisenberg]

Do you agree with him?

[Elad Raz]

In general terms, yes. Because even now, as you said, we are in the little leagues. It just takes time for companies to find market fit.

[Michael Eisenberg — 41:39]

It's three generations?

[Elad Raz — 41:41]

It takes time. Why? Because it takes you three to four years until you get the first chip from TSMC. So that's already four years. The first generation, at almost every company I know, is not exactly what the customer needs. You need another generation. By the time you start selling, we're selling the Maverick 2, you're already in your third generation in terms of lifetime. So yes, I agree with that.

[Michael Eisenberg — 42:12]

When does your third generation tape out? Okay, let me ask you another question. I want to get to your personal background for one second. So you served in the 8200 unit?

[Elad Raz — 42:25]

Correct.

[Michael Eisenberg — 42:25]

Okay, that's well known for cyber, not for chips or chip design. What happened to you?

[Elad Raz]

Personally, I think, I mean, it's incredible that the State of Israel educates young, talented people from an early age to go after cyber. Okay. So, I was in the first generation of cyber, but I love to create. I love to write code. I love compute. This is what I've done, and after my military service I decided, well, I've done it for long enough. I was really good at it, but it was time to build a company. I didn't know what I wanted to do. I started a services company around high-performance computing and high-performance networking. I got a lot of interesting projects; I think it was during the lifetime of that company, on an Integrity project, that you and I met for the first time. That company was later acquired by Eyal Waldman into Mellanox, and then I got to see the big supercomputers and I just fell in love with them. I said, okay, this is what I want to do, and I started NextSilicon.

[Michael Eisenberg — 43:46]

Just like that?

[Elad Raz — 43:47]

Exactly like that.

[Michael Eisenberg — 43:48]

Where did you recruit your team from?

[Elad Raz — 43:51]

All over.

[Michael Eisenberg — 43:52]

How many people work at NextSilicon now?

[Elad Raz — 43:54]

At the moment, around 400.

[Michael Eisenberg — 43:56]

400. And where did you recruit the technical talent from?

[Elad Raz — 43:59]

Yeah, so the majority of the company is R&D, engineering. It's weighted a bit more to the software side, but let's call it half and half between hardware and software. I'm the software guy; this is what I've done. So the software team mostly comes from intelligence units or other companies I had relationships with. I was able to get Eyal Nagar, my co-founder; he was at a lot of startups and did tape-outs all his life. I mean, I never built a chip. And he was able to get the best talent from all over Israel, from Apple and Google and a bit from Mellanox. I mean, all over.

[Michael Eisenberg — 44:51]

I want to talk for a second about the semiconductor industry. So you mentioned TSMC, and you mentioned it takes three or four years to get a chip out. Does it need to be this slow, or is it just the structure of the industry, where TSMC is essentially a monopoly on manufacturing, basically? Couldn't we accelerate this industry in any way, or not really?

[Elad Raz — 45:13]

Oh yeah, of course, of course, but not in our lifetime. And what do I mean by that? You know, when I came and started NextSilicon as a software engineer, I said, what's the problem? I mean, hardware is like software. Let's include the libraries and accelerate things. And, you know, the hardware engineers told me, Elad, you don't get it, it's different. And there are many disciplines around hardware design. For the audience, I'm going to divide my answer. There are fabless companies, like NextSilicon, that design a chip, and their end result is a graphics file that you send to TSMC for manufacturing; this is basically how to print the chip. And there are companies that own their own fabs, and many, many different vendors around those two aspects. So let's talk about the design phase. You have design, you have verification, you have the dark magic of analog, how to lay out a transistor that sends data in and out, and then you have place and route: how you take the design of the chip, press a button, and get the layout of the chip. Those are all different disciplines. And I said, what's the problem? Let's write a Python script that does everything with Claude Code. But the amount of IP and complexity in that industry, which is dominated by a few companies: you have Synopsys, Cadence, and Mentor doing the EDA tools, I mean, how to take code and turn it into the final design for manufacturing; Ansys, which got acquired; and others. And the problem there is the cost of a mistake. The cost of a mistake means your chip will not work. A chip that doesn't work is another $150 million, or $50 million if you find a quick fix. No one wants to take that risk. So everyone wants tools they can sign off on, so that everything works, and there is simply not enough money there to innovate.

[Michael Eisenberg — 47:31]

This whole AI race, and the chips race in general: the United States introduced the CHIPS Act. It's no secret that the tension between the US and China around semiconductors is very high. Every decision at TSMC to supply you with a taped-out chip is, at the end of the day, like a geopolitical decision. So...

[Elad Raz]

Absolutely.

[Michael Eisenberg]

If you were writing policy for the State of Israel right now or had $10 billion and a magic wand, what would you create to ensure that we're able to design, develop, tape out, and produce semiconductors going forward?

[Elad Raz — 48:13]

So let's divide it into manufacturing and design. Manufacturing, in the State of Israel, as much as I'm a Zionist and I love my state, is going to be impossible to build. We are not talking about $10 billion; we are talking about $30 billion if you want to be at the leading edge, and that's if you know exactly what you are doing. It's know-how that Intel, TSMC, and Samsung have mastered over the past two decades, and China is trying to beat them and is still about two generations behind.

[Michael Eisenberg — 48:52]

We have Intel fabs here.

[Elad Raz — 48:54]

You have good people, you have Intel fab. You don't have Intel technology.

[Michael Eisenberg — 48:58]

Right, that's true.

[Elad Raz — 49:00]

So in order to rebuild it, you still need that decade of development. By that time TSMC is going to have raised the bar. So you're talking more about $100 billion to build your own technology from scratch. In the design phase, absolutely. I mean, we are not doing anything around accelerating or de-risking there. Most of the multinationals that design chips are US-based. Many Israelis go through military service and then go build their own startups, because that's the ecosystem, that's where the investors are. Even for us at NextSilicon, fundraising at the beginning was super challenging, because you come to investors who understand cyber and ARR and getting fast money. A chip needs a billion dollars; you need patience, you need time; it's a different type of story. And we need to build it; we have to build it. It's important. I think the design phase is more important than manufacturing, because that's your power.

[Michael Eisenberg]

Can you disrupt Synopsys? Is it disruptable?

[Elad Raz — 50:12]

I think for the EDA tools, it is. But the market is not as big as everyone hoped, not enough to be worth the investment.

[Michael Eisenberg — 50:21]

So going back to something about Nvidia. You said in 2024 that the Blackwell announcement was the best thing that ever happened to NextSilicon. Why did you say that? What does that mean to you?

[Elad Raz — 50:34]

I said that about Blackwell, and I said that also about Rubin.

[Michael Eisenberg — 50:41]

Yeah, you said that about Rubin too. Yes, that's true.

[Elad Raz — 50:42]

Yeah. We're scared shitless of every company that develops AI chips, and you never know how they are going to progress. And I think the best thing that happened to us was Hopper and Blackwell. Both of them were designed and manufactured at 4 nanometers. And if you look at the Blackwell specification, remove the sparsity, remove, you know, the other nuances that are pretty cheap, and take the core compute of Blackwell and the core compute of Hopper and compare them one to one, you see Blackwell is 2x Hopper. Great. But it also consumes 2x more power. Nothing in the architecture improved between Hopper and Blackwell. NVLink, for sure, and other nuances, but that's their core. Hopper to Rubin: almost 2x more power, 2x more performance. This is...

[Michael Eisenberg]

Linear scaling.

[Elad Raz]

Linear scaling. At the end of the day...

[Michael Eisenberg — 51:42]

You want to say that you're exponential in the power to compute.

[Elad Raz — 51:44]

Between generations, way more than that. And we are just at the beginning of our design. We have so many places to improve efficiency, because we spent $100 million on a chip and not $1 billion on a chip.

[Michael Eisenberg]

What does that mean? In terms of engineering.

[Michael Eisenberg — 52:03]

You spend 100 million, they spend a billion, so that's 10x. But how does that translate into exponential impact?

[Elad Raz — 52:06]

If you work on the architecture and on a better transistor layout, you can squeeze more performance out of those changes. Not 10x, but 2x.

[Michael Eisenberg — 52:15]

Are you arguing that your capital efficiency has actually caused you to be more innovative?

[Elad Raz — 52:19]

Of course. We have to be. To start with, you know, every company has its own risk. It's a roller coaster, ups and downs. In semiconductors it's way worse; with these ups and downs you need a lot of help from Hashem [God]. You know, that miracle. Hashem is God Almighty. That's what you need in order to succeed in semiconductors. And yeah, you have limited resources. You have to innovate.

[Michael Eisenberg — 52:50]

Interesting. So a couple of closing questions. Since the advent of AI, Claude Code, what changed at NextSilicon?

[Elad Raz — 52:57]

Everything, in every company, across the board. I mean, people don't understand what is coming, and it's going to be a fun roller coaster. I think that at NextSilicon we have 400 talented people. With those AI tools, even in hardware, remember I said it's the most risk-averse discipline, even there they are open to this type of innovation. And I literally see people getting 10x more productivity using those tools.

[Michael Eisenberg — 53:32]

Have you mandated that everyone needs to use...

[Elad Raz]

Absolutely.

[Michael Eisenberg]

AI code?

[Elad Raz — 53:37]

Of course. I mean, a company that doesn't have a leaderboard of how many tokens are being used is...

[Michael Eisenberg]

The more the better.

[Elad Raz]

Of course.

[Michael Eisenberg — 53:46]

How big is your token budget now?

[Elad Raz — 53:50]

It depends. We are using everything. Me personally, I'm using Gemini, Codex, and Claude. We have developers who generate $300 a day, $100 a day per developer. But, just for the timeline, we are now at April 26th, and these types of questions tend not to age well. We are now in a phase where training for employees is the most important thing. A personal example: I find myself walking around, and I see a developer running an agentic task, and I ask him, okay, how many agents have you opened? He answers, only one. I say, let's open another one and work on a different problem in parallel.

[Michael Eisenberg — 55:19]

And how does it really help, even? Because it takes two or three years to tape out a chip anyway. So you're running faster on the software side of it, but you can't move the atoms.

[Elad Raz]

So software is a no-brainer, because everyone understands that. Let's talk about the design. In design and verification, specifically in verification, you write a lot of tests in order to find hardware bugs, and you manually need to sift through them. To start with, all the basic tests, you do that in a second. Then you run agentic tests across all your hardware design until you find bugs. All of those are applicable to hardware as well.

[Michael Eisenberg]

And how much does that, if you have a call it a two-year cycle from beginning to end, how much would that take out?

[Elad Raz — 55:31]

Not much, because you have nine months in manufacturing. But within the one year of design, you can just do more.

[Michael Eisenberg — 55:39]

Got it. You can do more?

[Elad Raz — 55:41]

More features, better quality, better frequency and power consumption.

[Michael Eisenberg — 55:46]

So the jump from generation to generation should be bigger.

[Elad Raz]

Yes.

[Michael Eisenberg]

But you can't dramatically shorten the time from, call it beginning of design to tape out.

[Elad Raz — 55:55]

I don't think so.

[Michael Eisenberg — 55:56]

Maybe a month here or there, max. Nothing dramatic. That thing is interesting to think about, by the way: the world of atoms has its own pace, the world of code has an entirely different pace these days, and the hardware can bear what the hardware can bear over time. So, you mentioned that you go around and set an example, asking, you know, you did one agent, let me help you do another agent. Apparently, you still code in assembly on a Commodore 64. I think that might have been my first computer, the Commodore 64. It had the cartridges in the back. Do you still have those? The one you have?

[Elad Raz — 56:34]

Yes, yes, yes.

[Michael Eisenberg — 56:35]

You still have a Commodore 64?

[Elad Raz]

I have a Commodore. I don't have the cartridges; I have those for the Atari. For the Commodore I have the big disks, you know, the floppy disks.

[Michael Eisenberg]

Oh okay.

[Elad Raz]

Yeah, that's the Commodore disk that I have. But yeah, I love all computers.

[Michael Eisenberg — 56:52]

You actually code on that thing?

[Elad Raz — 56:54]

Yeah, yeah.

[Michael Eisenberg — 56:56]

You don't have a VGA monitor still?

[Elad Raz — 56:58]

Yeah, I'm a bit lazy so I'm doing everything in emulator before moving it to the actual hardware. But, yes.

[Michael Eisenberg — 57:04]

Two last questions for you. We're living like in this moment of history right now where a chip company, Nvidia in this case, was like the most valuable business in human history. What do you think that means? Not just financially, but philosophically about the world where this notion of silicon can become the most valuable thing in human civilization.

[Elad Raz — 57:30]

No surprise there. Think about it for a second. A thousand years ago we were all fighting over food. Okay, that's the basic element everyone needs: food, water, resources. I think that around 200 or 300 years ago it was about energy resources, for most of the world. Now we are in the age of compute. Those are the phases: the more energy you can produce, the more will be utilized by compute. Compute is the wave.

[Michael Eisenberg — 58:03]

So when they call it a wafer, it's not just silicon, it's like a food reference.

[Elad Raz — 58:09]

I never thought about that.

[Michael Eisenberg — 58:11]

Could be a religious reference too, in Christianity. And to finish: I started by asking you what was etched on the first NextSilicon chip, and you said it was Bereshit, or Genesis. And on your most recent chip?

[Elad Raz — 58:27]

So, Maverick 2. For a long time we couldn't find a better word than Bereshit to place on the Maverick 2. I asked everyone, and, you know, people gave us different references, but nothing was more powerful. And I told the engineers, just keep the word Bereshit, because it's a Bracha; just keep it.

[Michael Eisenberg — 58:52]

Bracha means blessing.

[Elad Raz — 58:53]

Yeah. And then Maverick 2, sorry. And then unfortunately October 7th came, a few weeks before the tape-out. The day after, it was Saturday and Sunday, I sent an email to TSMC and said, well, I want to put Am Yisrael Chai, "the people of Israel live," on it. And I had to explain to them why it's not politicized. And I told them, I'm not going to tape out unless you put it on. That's Israeli resilience; let's etch it. And that's what exists on the Maverick 2.

[Michael Eisenberg — 59:36]

Am Yisrael Chai on the Maverick 2.

[Elad Raz]

Yeah.

[Michael Eisenberg]

It's inspiring.

[Elad Raz — 59:39]

Thank you.

[Michael Eisenberg]

And thoughtful. And that's what's running high-performance computing all over the place.

[Elad Raz — 59:45]

Exactly.

[Michael Eisenberg]

Much more than resilient, it's powerful.

[Michael Eisenberg — 59:49]

Elad, Thank you for joining us.

[Elad Raz — 59:51]

Thank you.

[Michael Eisenberg — 59:52]

Thank you for joining us on Invested. Thank you to Elad. If you enjoyed this podcast, please rate us five stars on Spotify and Apple Podcasts. Please subscribe to the YouTube channel, and if you're enjoying the content, be sure to share it with your friends as well.

60 seconds with
Elad Raz
Show References

Follow Elad on LinkedIn

Learn more about NextSilicon

Subscribe to Invested

Learn more about Aleph

Subscribe to our YouTube channel

Follow Michael on Twitter

Follow Michael on LinkedIn

Follow Aleph on Twitter

Follow Aleph on LinkedIn

Follow Aleph on Instagram

Credits

Executive Producer: Erica Marom 

Producer: Sofi Levak, Myron Shneider, Dalit Merenfeld

Video and Editing: Nadav Elovic 

Music and Creative Direction: Uri Ar 

Content and Editorial: Jackie Goldberg

Design: Nimrod Sapir

