Learning Management from OpenAI: How Was the Strongest Product of 2023 Created?

Wallstreetcn
2023.11.16 09:45

OpenAI is not the kind of place where there are still a bunch of people working at 2 a.m.

Introduction to OpenAI and Myself

OpenAI is the creator of ChatGPT. I hope you have already used ChatGPT to learn new things, help with your writing, or simply to chat for fun. ChatGPT is just one of OpenAI's products. OpenAI has also launched DALL-E 3 (image generation), GPT-4 (our most advanced model), and the OpenAI API (used by developers and companies to integrate AI into their own businesses).
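For context on what "integrating AI" through the API looks like for a developer, here is a minimal sketch using the official Python SDK (v1-style client). The model name and SDK details are assumptions that vary by version; treat this as illustrative rather than a definitive integration guide.

```python
# Minimal sketch of calling the OpenAI API from Python (openai >= 1.0 style).
# The model name below is an assumption; substitute whichever model you have access to.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize what the OpenAI API is used for."},
    ],
)

print(response.choices[0].message.content)
```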

As for myself, I am Evan Morikawa, the Head of the Applied Engineering Team at OpenAI. My role involves overseeing the development and implementation of various features and functionalities in ChatGPT and other OpenAI products. I work closely with a talented team of engineers, researchers, designers, and product managers to continuously improve and enhance our AI technologies. The team responsible for developing these products, including engineering, product, and design, is internally referred to as the "Application Team." OpenAI's goal is to build safe AGI that is useful to all of humanity. The Application Team's goal is to create products that truly benefit humanity through artificial intelligence.

The research department trains large-scale models, and then the application department builds products such as ChatGPT and API based on these models. In practice, this is a more closely integrated process, which I will explain in detail later.

The Application Team is a new addition to the company. OpenAI was founded in 2015, while the Application Team was established in the summer of 2020. We formed this team because we wanted to build and expand an API around GPT-3, which was a model we had just finished training at the time.

How did you become the head of the OpenAI Application Team?

I joined OpenAI in October 2020. At that time, OpenAI had about 150 employees in total, and the Application Team had only a few people. Almost everyone working at OpenAI was a researcher. I didn't have a Ph.D. in machine learning, but I realized that OpenAI was building an API and an engineering team, which excited me a lot.

At OpenAI, I initially started as an individual contributor on the GPT-3 API, writing code myself. In January 2021, a few months after I joined, I transitioned to managing our application engineering team. At that time, my team had about 6 people. Now, two and a half years later, the application engineering team has grown to 130 engineers, and I manage half of them. The entire Application Team has about 150 people, with an additional 20 product managers and designers.

Since October 2020, OpenAI has grown from about 150 people to approximately 700 people today. We are still continuing to hire.

How did OpenAI grow so rapidly? It seems like you release significant new features every few months. As an outsider, it's already quite challenging to keep up with your progress!

On the one hand, it is undoubtedly the fastest-paced place I have worked in my career; on the other hand, there is no magic to it. I believe the key factors are:

  • Operating ChatGPT like a small independent startup
  • Close integration of research and application
  • Long-term product and research thinking
  • Progressive releases
  • High density of talent
  • Accumulated habits over time

You mentioned that ChatGPT operates like an independent startup. Can you explain how that works?

ChatGPT looks, feels, and behaves like a product from a startup that has only been around for a year, but OpenAI itself has nearly 8 years of history. The internal application team at OpenAI was formed 3 years ago, and ChatGPT is a product team within it that was established about a year ago. Other members of the application team's leadership and I want the ChatGPT team to feel like its own independent startup. In practice, the way we pursue this goal has changed as the team has grown.

In the summer of 2022, we began developing ChatGPT. At that time, the application team consisted of about 30 engineers, several product managers, and designers. The existing products included the GPT-3 and Codex APIs, model fine-tuning, embeddings, DALL-E 2, and more.

All of these products used the same codebase, ran on the same cluster, and went through the same build pipeline. Within the application team, our structure was divided by function, with engineering as a single unified team.

The emergence of ChatGPT changed this situation.

Several application engineers, designers, researchers, and Greg Brockman (OpenAI's President and Co-founder) gathered in a room to rapidly iterate on product ideas.

We provided this new team with their own codebase and a brand new cluster. The development environment resembled the early stages of a startup or personal project.

Our goal for this small ChatGPT sub-team was to create an atmosphere similar to that of an early-stage startup, where they could iterate continuously and find product-market fit. Each member of the team worked in the same office, and we rearranged the seating to have everyone sitting close to each other.

As the ChatGPT team grew, we ensured that the team remained vertically integrated. This meant that engineering, product, design, and key researchers always collaborated closely.

In May 2023, Peter Deng (OpenAI's Vice President of Consumer Products) joined ChatGPT to lead the engineering, product, and design teams, further deepening this model.

We knew to organize the ChatGPT team this way because we had set up a similar structure when the application team started building the first version of the API. Three years ago, we likewise started from a new codebase, a new cluster, and a new development environment with just a few engineers. We operated like an early-stage startup searching for product-market fit, and we found it with the API product.

This "fractal startup" approach is a great model for any new product category. I hope we can continue to adopt this approach and iterate quickly on the new ideas we consider.

How important is the close integration with research, and why?

In most technology companies, the "classic" trio of functions is "EPD" (Engineering, Product, Design): members from engineering, product, and design collaborate extensively in cross-functional teams.

At OpenAI, integrating research into product teams is crucial. So instead of the classic "EPD," I prefer to describe our closely collaborating group as "DERP" (Design, Engineering, Research, Product).

At OpenAI, many product issues are actually research problems. Consider requests that look like ordinary feature requests: How can we make ChatGPT's output more concise? How can we make its answers more accurate? How can ChatGPT connect to more data sources?

Although these problems may appear to be product-related, they rely heavily on research: how do we adjust or fine-tune the base model to achieve the desired behavior? Pairing researchers with product engineers is not a given. At OpenAI, research and application are two separate organizational structures, and within the research organization there are various research teams.

For example, there is a pre-training team responsible for training the GPT-4 model, as well as a fine-tuning team responsible for fine-tuning GPT-4 after training. There is also an alignment team, and multimodal teams that enable GPT-4 to see, hear, and speak, among others.

Researchers often have significant academic or industry backgrounds. They read a large number of academic papers to stay updated on the latest technologies. They also come up with ideas and conduct numerous experiments to improve our models. They are hands-on; researchers need to do a lot of engineering design and write a significant amount of code!

We could have chosen to "throw the model over the wall" and let the application team productize it. However, we want to avoid a culture where the research department focuses solely on experimentation while the product department focuses solely on commercialization and making money.

To avoid this situation, product teams like ChatGPT are composed of software engineers, designers, product managers, and researchers.

In the case of ChatGPT, most researchers come from our research team known as "post-training." These researchers are proficient in the latest fine-tuning techniques and reinforcement learning (RL) methods, such as Proximal Policy Optimization (PPO). Because these researchers are also part of the product team and conduct their own A/B experiments, the feedback loop between research and engineering is very tight.
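For readers who have not encountered PPO before, here is the standard clipped surrogate objective from the published PPO paper (Schulman et al., 2017), included only as background; it says nothing about the specifics of OpenAI's internal post-training setup:

```latex
% PPO clipped surrogate objective (Schulman et al., 2017)
L^{\mathrm{CLIP}}(\theta) =
  \hat{\mathbb{E}}_t\!\left[
    \min\!\Big( r_t(\theta)\,\hat{A}_t,\;
                \mathrm{clip}\big(r_t(\theta),\,1-\epsilon,\,1+\epsilon\big)\,\hat{A}_t \Big)
  \right],
\qquad
r_t(\theta) = \frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_{\mathrm{old}}}(a_t \mid s_t)}
```

Here r_t(θ) is the probability ratio between the new and old policies, Â_t is the advantage estimate, and ε is the clipping range; the clipping keeps each policy update conservative.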

The close coupling with the research department is the reason why we can launch new ideas so quickly. How were we able to rapidly deploy browsing, code execution, plugins, and other ChatGPT features? It's because of the tight integration between teams!

All of this starts with research ideas, and the ability to deploy them into production quickly comes from the close collaboration between the research and engineering teams! There is also a culture of hacking and prototyping in both the research and application teams, and many prototypes can be moved into production quickly.

How does long-term thinking help with product implementation?

OpenAI's mission is to ensure that artificial general intelligence (AGI) benefits all of humanity. The so-called AGI refers to highly autonomous systems that can outperform humans in economically valuable work. This mission is reflected in OpenAI's charter, which provides a detailed overview of the company's strategy, including a strong emphasis on long-term safety.

Our charter and mission come up at almost every all-hands meeting. In product discussions, we deliberately ask "which option brings us closer to AGI?" to guide decisions and keep them aligned with our mission.

This not only helps us determine what to build but also leads to many cases where we decide to abandon certain options because they don't align with our mission.

A clear focus is always a driver of speed. I believe our mission helps us maintain that focus and paves the way for many new ideas.

Another key factor is how our research teams operate.

Conducting research in parallel accelerates product development. Research work involves constantly exploring unified approaches and building more powerful models. For example, alongside pure text models, we have also been researching multimodal capabilities.

The research team also ensures that their work intersects with other research areas. We don't want to create many small models; instead, we aim to move towards AGI.

Conducting multiple research projects simultaneously speeds up our progress. For instance, we were able to launch GPT-4, image understanding, text-to-speech, and speech-to-text functionalities in a relatively short period of time, largely thanks to the synchronized research efforts.

What is OpenAI's strategy for product releases?

We avoid the "big bang" approach and instead follow a strategy of continuous updates. This is not only because "early and frequent releases" is a proven product strategy, but also because continuous updates are fundamental to our safety principles and strategy.

AI safety is an important topic within the company and is at the core of our work. Safety considerations often take precedence over how quickly a project is started and shipped. One of our core principles is to learn incrementally from the real world.

We employ various methods to ensure the safety of AI, such as:

Red Teams: A group of security experts, known as the "Red Team," plays the role of attackers to test and evaluate the effectiveness of our security measures.

Alignment: Ensuring that AI systems behave in a manner consistent with human values and do not inadvertently engage in harmful behavior.

Policy Work: Collaborating with policymakers worldwide to establish specific rules that enhance the safety and trustworthiness of AI technologies and services.

In addition to these efforts, we have also recognized that gradual and controlled exposure of AI to the real world is one of the most important ways to discover and address safety issues.

All our products are carefully monitored and gradually rolled out. Before a product is released broadly, it goes through trials with selected customers and then gradually expands to a wider user base. An interesting fact: when the world saw ChatGPT "suddenly" explode, we had actually spent years inside OpenAI developing and refining it in a controlled environment! The release of ChatGPT was originally intended to collect feedback on conversational interaction, but that low-key release unexpectedly took off.
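To make "gradually rolled out" concrete, a common mechanical pattern is a deterministic percentage gate that widens over time. This is a generic sketch, not a description of OpenAI's actual infrastructure; the function name and thresholds are hypothetical.

```python
# Hypothetical percentage-based rollout gate: bucket each user deterministically
# into [0, 100) and compare against the current rollout percentage.
import hashlib

def in_rollout(user_id: str, feature: str, percent: float) -> bool:
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 10_000 / 100.0  # stable value in [0, 100)
    return bucket < percent

# Start with a small trial cohort, then widen the gate as monitoring allows.
print(in_rollout("user-42", "browsing", percent=1.0))   # ~1% of users
print(in_rollout("user-42", "browsing", percent=25.0))  # later: ~25% of users
```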

What kind of people does OpenAI hire and what is their recruitment strategy?

Sam Altman, the CEO of OpenAI, places great emphasis on a high density of talent. This means the average skill and performance level at OpenAI is well above what is typical elsewhere, and exceptional work is the foundation of the company.

An experienced team can deliver products very quickly. Because high talent density matters so much, we intentionally lean towards hiring senior engineers for our applied teams, and we strive to keep teams small. This approach has proven effective: small, senior teams get things done quickly!

This also makes it easy to empower and trust the team. For decisions made on the spot, I can usually be confident that they point in the right direction. We review decisions internally and write down our reasoning, but we don't impose overly strict formalities.

The combination of startup-founder types and people who have worked at and helped grow large tech companies has been very effective. We need both types of people:

The early internal iterations of ChatGPT were quite rough, and founder types were well suited to that part of the work; scaling complex distributed systems, on the other hand, requires engineers who have been through large-scale scaling before.

Whether it's buying productivity tools or building them ourselves, we are decisive.

We understand the importance of having people who are familiar with the latest tools in the startup ecosystem and can quickly identify problems. Buying these productivity tools helps us keep the team small and agile.

However, we can't always rely on buying and using existing tools. For tools that we consider to be core to our business, we develop them ourselves. Additionally, as we scale, we may also phase out some productivity tools.

Humility is an important quality behind our flat job title. Everyone on the technical side at OpenAI carries the same title, "Member of Technical Staff," regardless of experience or expertise.

There are several reasons for this. We firmly believe there is no room on our team for lone wolves who can't work with others. We also want to avoid attracting people who are focused purely on promotion, as they tend to build redundant, overly complex products in order to inflate their own titles.

We want everyone to communicate well and do what is most beneficial for our mission. As long as we can adhere to this, we will be more focused and able to complete tasks faster.

OpenAI prioritizes safety over speed. When hiring, we pay close attention to attitudes toward AI safety. We do not want to foster a culture where speed takes precedence over safety, because attention to safety is crucial to how we get work done.

You mentioned that OpenAI's daily work methods are also one of the factors contributing to the company's rapid development. What do you think is the most important method?

The reason why we iterate so quickly is largely due to the many small details in our work, which can be summarized as follows:

From Monday to Wednesday, everyone works in person at the San Francisco headquarters. Face-to-face work helps us get things done in unexpected ways.

I still remember that during ChatGPT's early development, what we were building changed constantly, sometimes every day. The underlying GPT model, for example, was evolving daily, and our researchers had to constantly adjust it.

We have made surprising progress by tapping colleagues on the shoulder or jumping into conversations we happened to overhear. Of course, we also hold regular sync meetings, but much of our best progress happens outside scheduled collaboration.

We also pull up whiteboards on the fly. We come up with many new ideas at the lunch table - really! As the team has rapidly expanded, these impromptu interactions have become a major boost to onboarding. Without this face-to-face contact, it would have been much harder to bring 120 people on board within two years.

For some people, Thursday and Friday are meeting-free days, and there are fewer people in the office. On those days, we focus on more concentrated work.

Coordinating our days like this is key to our efficiency: it lets us hit important milestones in the first three days of the week.

The work pace is quite intense. The mission, the product, and the technology's impact drive everyone to work hard. That said, this does not necessarily mean long working hours: OpenAI is not the kind of place where people are still working at 2 a.m. We support each other here, and we actively watch out for one another's burnout.

As I mentioned, we tend to hire more senior and experienced employees, which also means there are many parents here. Everyone values family time highly. That focus on family also requires the team, myself included, to stay focused, prioritize tasks, and manage time flexibly.

OpenAI seems to have accomplished a lot in a relatively short period of time. So, what's next?

In order to continue maintaining a high delivery speed, we need to overcome many challenges, such as:

The team continues to grow. The larger the team, the more difficult communication and coordination become.

More mature products. As the products we create become more mature, it becomes more difficult to make drastic changes to them.

AI safety issues. These issues will become more severe, not only for us but for the entire industry as well. OpenAI may need to change its previous architecture to address these issues.

We hope to retain most of our current principles: a strong sense of mission, integrated research, and a high density of talent are always good things. Overall, I am cautiously optimistic about my team, the company's future, and the potential of artificial intelligence. We believe it is crucial to help more people realize the value of the tools we are creating for productivity. Right now, we have only just scratched the surface.