A Wall Street Genius's Final Investment Playbook-Chapter 198
Two days later, at the headquarters of Hatchwork in San Francisco. Today was the day for the second interview with the founders of Next AI.
"Please come this way."
The Hatchwork office was overflowing with the energy and vitality typical of a startup.
Standing desks scattered around, whiteboards filled with ideas, and even a ping-pong table in the center of the room, all hinted at a free and experimental atmosphere.
Before long, Alex and I arrived at a space with about five or six people gathered.
Alex introduced me to each of them.
"This is Corbin Dross."
He had a familiar face.
The future CTO of Next AI, Corbin Dross.
He was a scaling expert who had successfully expanded organizations multiple times and earned the nickname "10x Engineer."
"This is Ilya Vantell, Aiden Cadwin, Nova Linkrest, and Kyle Thomas."
The rest were unfamiliar names.
According to Alex's explanation, they were all AI researchers.
‘Researchers, huh…?'
This was an important clue.
By understanding the background of the interviewers, I could formulate a strategy.
But they were all hands-on practitioners.
So the focus of this interview would likely be on practical details.
"Then, let's go this way."
At the entrance of the room Alex guided me to, there was a sign that read ‘The Sandbox’.
‘A sandbox?'
It was probably a space for brainstorming freely and creatively, like stacking and knocking down sand.
I thought so and stepped into the room.
However, as soon as I saw the inside, I froze in my tracks.
There was no furniture in the room, only a dozen or so bean bags scattered messily around.
‘Of all things…'
I don’t like bean bags.
I can't stand the sinking feeling, and I definitely don’t like being so close to the floor.
But, I couldn’t show any discomfort in a place like this.
‘For now, I need to blend into the atmosphere.'
What I wanted was to become a co-founder of Next AI. I couldn’t afford to be too picky here.
I had to show respect for their culture and adapt.
"Ha! I'm tired!"
People started sitting comfortably on the floor or the bean bags one by one.
So, I picked a suitable bean bag and sat down…
At that moment, I noticed the person beside me leisurely placing his hand on the floor.
As his bare hand touched the carpet, my eyes instinctively squeezed shut.
‘My head is spinning.'
Touching the carpet, which had been stepped on by shoes that had just come from the bathroom…
But this was just the beginning.
"Do you like pizza?"
Alex, with a bright smile, brought over a stack of pizza boxes from the corner.
"Try it! I guarantee it's the best pizza in America!"
The box opened, and everyone began taking a slice of pizza.
With the very hand that had just touched the carpet…
‘Oh my God.'
I tried to steady my dizzying mind and grabbed a slice of pizza.
My hands weren’t exactly clean either.
So I carefully tried to touch only the crust, the part that wouldn’t go into my mouth. But the slice was too big, and I must have looked clumsy.
Some people burst into laughter when they saw me.
"Haha, you look like someone who's never eaten pizza before! Don’t they eat pizza on Wall Street?"
It was a lighthearted joke, but I couldn’t just let it slide.
If I did, a ‘Silicon Valley vs. Wall Street' dynamic could form.
It wouldn’t be good if the perception that I was an outsider strengthened. So, I put on a somewhat pitiful expression and calmly spoke up.
"Actually… I have a bit of a compulsive disorder. I find it difficult to have greasy hands. So, I usually eat with a fork and knife. As a result, I often get teased by people."
As soon as I finished speaking, an awkward silence fell.
The smiles faded from their faces, replaced by embarrassed expressions.
"Ah, sorry. I didn’t know."
"It must have been difficult."
Someone, flustered, quickly handed me a paper plate with a plastic fork and knife.
‘This is definitely Silicon Valley.'
What would it have been like on Wall Street?
They probably would have laughed mockingly and said, "Stop making a big deal out of it."
But here, it was different.
There’s a culture here where such unique traits are inherently respected.
Anyway, after eating a few slices of pizza, the formal interview began.
The first person to speak was a woman named Vantell.
"I heard you’re planning to invest $1 billion in AI for a friend. How close are you to that friend?"
It was an unexpected question.
I hesitated for a moment and looked at her, and she added,
"I’m curious because $1 billion seems like an enormous amount for just a friend."
There was doubt in her words.
Spending $1 billion purely out of friendship was too much to believe.
In other words, she was trying to see if I was hiding another motive.
I swallowed a smile internally and wiped my hands with a napkin. Then I spoke calmly.
"Let’s say someone here, like Alex, is in danger of losing his life. If you could save him by paying money, would you stay silent?"
"Well, of course, I’d do my best to help within my means. But… $1 billion is just too much. Sorry, Alex."
"That’s harsh."
"Is that all our friendship amounts to?"
Laughter and playful boos filled the room.
Vantell, still laughing, soon turned serious and looked at me.
"No matter how close we are, $1 billion is just too much to handle. Honestly, I can’t quite understand it…"
That meant she wasn’t convinced yet.
However, I wasn’t at all flustered. I smiled and presented a new question.
"Then, let’s change it up. What if, to save Alex, you had to give half of this quarter’s revenue?"
"Well, I could manage that. But $1 billion…"
"That’s exactly it."
As I interrupted her, she tilted her head.
"Sorry?"
"One billion dollars is roughly half of the money I made this quarter… no, actually, it's not even half. That’s why I’m willing to give it up."
For a moment, the room fell silent.
The expressions on everyone's faces showed confusion and surprise.
They probably thought to themselves,
‘Did I hear that correctly?'
Half of this quarter’s revenue is one billion dollars.
That means the actual revenue is two billion dollars in one quarter.
One billion dollars is roughly 1.4 trillion Korean won.
That's an amount most ordinary people could never even imagine in a lifetime.
But for me, it wasn’t.
"Of course, it's a large sum, but from my perspective, it's money I can afford to lose. If you look at it in terms of the percentage of revenue, it should be easier to understand."
Right, there’s no grand scheme here.
It’s just that the units of money I deal with are different.
So, for me, one billion dollars is just enough to ‘waste' on a moonshot.
"… ."
A long silence followed again.
From the looks on people's faces, I could tell they were thinking,
‘Wow, Wall Street people really are different.'
It seems like they were overwhelmed by the scale of the money I make.
‘Usually, it’s best not to reveal such cultural differences… but when money is involved, things change. In the face of overwhelming wealth, any cultural difference becomes acceptable.'
The silence was broken by the CTO, Dross.
"If we invest in a moonshot project and it succeeds, how will ownership be divided? Will the investors take it all?"
There was a note of caution in his voice.
He seemed to suspect I might be trying to monopolize the technology under the guise of investment.
‘If he’s the CTO, it’s only natural he’d have such concerns.'
As a Wall Street person, if I were to monopolize this technology in the Wall Street way…
He wouldn’t accept me as a colleague, for sure.
Of course, I had an answer prepared for this.
"Well, the decision will probably have to be made through discussions among the stakeholders, but if my opinion is considered, it might go in a slightly unconventional direction."
"Unconventional direction?"
"I want to make this technology open-source."
"……!"
Dross's eyes widened in surprise.
He seemed more surprised than when he heard my "I can afford to waste one billion dollars" comment.
"What I want is not profit, but results. If we make the technology open-source, it would allow scientists and researchers around the world to collaborate. That way, we can achieve the results we want more quickly and efficiently."
The reason I gave this answer was simple.
‘Because it's the right answer.'
Since the beginning, Next AI has made ‘openness' its core value.
They have opposed the idea of a single entity monopolizing technology under the mission that "everyone should benefit from AI."
Of course, the policy was later revised due to concerns over misuse and safety, but at this point, it's still early in the foundation of the company.
For them, the keyword ‘openness' would strike a strong chord.
‘It worked.'
I saw a positive sign on Dross’s face.
He seemed lost in deep thought and didn’t press me further.
Then, one of the researchers spoke up.
"Then, why AI in particular? Your expertise is in medicine. Wouldn't it be more efficient to invest in companies researching fields like immunology?"
The questioner had a slightly arrogant look on his face.
"People who don’t know much about AI often place excessive expectations on it. They think AI is some kind of all-powerful keyword…"
His tone had the superiority of an expert. It seemed like he was looking down on an uninformed general public.
"For example, there’s the paperclip problem. Let’s say there’s an AI tasked with producing as many paperclips as possible. At first glance, it seems harmless, but…"
"Yes, I know. I was quite impressed when I read Bostrom’s work."
The questioner’s eyes widened.
‘How does a layperson know about that?'
"That was a shocking idea. AI doesn’t understand the nuances of human values, so it would become solely focused on the goal of ‘paperclip production.' In the process, it could waste resources, destroy essential infrastructure, or even use humans as materials for paperclips. This is a good example of how even seemingly harmless input can lead to catastrophic results."
"……"
The questioner looked at me blankly for a moment, then regained his composure and spoke again.
"If you've read it, you’ve probably thought about solutions as well."
"Of course. To prevent such things from happening, we need to design AI to learn human values."
However, he interrupted me, speaking in a rather excited tone.
"That’s not so simple. Human values are highly context-dependent and incredibly complex. It’s practically impossible to capture them in a static system. Even the principle of ‘do no harm' leaves endless room for interpretation."
"Wouldn’t it be possible to design sufficient safety mechanisms?"
"For a highly intelligent system, even those safety mechanisms could be neutralized. How would you maintain control?"
"I do have a method in mind…"
I stared at him for a moment, then hesitated before speaking.
"We integrate human feedback into the training process. Not just learning data, but creating a system that learns human preferences and understands context."
The air in the room shifted after I said that.
I could see the shock on the faces of the people again.
What I had just described was Reinforcement Learning from Human Feedback (RLHF).
By 2023, any AI investor would be familiar with this concept.
But at this point, it was still unfamiliar.
"There’s no need to explicitly define all values within the system. If the system learns to observe human behavior and infer context…"
"The concept is interesting, but realistically, it’s difficult to implement. Even with reinforcement learning models, computing resources are already maxed out. If we were to integrate feedback, we'd need a model with two or more stages, right?"
One of the researchers interrupted me, but this wasn’t something to take offense at.
His tone might have seemed critical, but it wasn’t.
This was the kind of deep discussion you’d find only among experts in a particular field.
‘It looks like they’re starting to see me as a colleague.'
This meant they no longer saw me as an outsider, but as someone they could sit at the table with on equal footing.
‘Of course, this approach works well with nerds.'
Researchers are essentially nerds.
And nerds tend to warm up to people who are genuinely passionate about the subjects they are immersed in.
So, I kept tossing them topics they’d like.
"I recently came across an interesting paper on a new mechanism, not RNN or LSTM…"
"You mean attention-based mechanisms! Using weights to…"
Before long, the room was filled with heated technical discussions.
As I became more immersed in the deepening conversation, suddenly, one of the researchers looked at me, as if realizing something, and spoke up.
"Are you really serious about this?"
Not serious, just someone who knows the future.
Well, it’s all good.
Now, the way they saw me was clear.
Someone serious, wealthy, and with insight and understanding of AI.
At this point, there was no reason not to invite me into their fold.
As expected, Alex spoke after a moment of contemplation.
"Actually, there’s a reason we invited you here. We want to create an organization that leads safe development in AI and discusses related issues."
He was now telling me about Next AI’s plans.
And then, "Would you be willing to join us?"
They had invited me.
But I couldn’t get too excited just yet.
What mattered was what role I was invited for.
"If it's an organization being founded with good intentions, I can donate. But if you want more than that… I’m curious to know what specific role you envision for me."
"It’s not a big deal. As you showed just now, feel free to offer ideas."
A smile naturally spread across my face.
This meant they weren’t just asking me to contribute money as an investor, but to think about the direction and strategy with them.
In other words, they were offering me a board position.
Alex continued.
"Moreover, we’d like to ask for financial advice to expand. The biggest issue right now is funding. The computing power needed for these kinds of research is considerable…"
"Don’t worry about the costs. I’ll take care of that."
Alex’s expression brightened at my words, but then he hesitated and added,
"Even so, we can’t rely solely on you, Sean. So… actually, I was planning to contact Stark soon."
"Stark?"
For a moment, my mind went blank.
Aaron Stark.
The name needed no explanation.
The business magnate who led the electric car revolution and pioneered private space travel.
But then, ‘Why is he coming up now?'