ERNIE Bot can generate text, images, and videos based on natural language inputs. It is powered by ERNIE (Enhanced Representation through Knowledge Integration), a powerful deep learning model.
The first version of ERNIE was introduced and open-sourced in 2019 by researchers at Baidu to demonstrate the natural language understanding gains of integrating knowledge – such as entity and phrase information – into language model pre-training.
In 2021, Baidu researchers published a paper on ERNIE 3.0 claiming that the model exceeds human performance on the SuperGLUE natural language benchmark. ERNIE 3.0 set a new top score on SuperGLUE, displacing efforts from Google and Microsoft.
According to Baidu’s CEO Robin Li, opening up ERNIE Bot to the public will enable the company to obtain more human feedback and improve the user experience. He said that ERNIE Bot is a showcase of the four core abilities of generative AI: understanding, generation, reasoning, and memory. He also said that ERNIE Bot can help users with various tasks such as writing, learning, entertainment, and work.
Baidu first unveiled ERNIE Bot in March this year, demonstrating its capabilities in different domains such as literature, art, and science. For example, ERNIE Bot can summarise a sci-fi novel and offer suggestions on how to continue the story in an expanded universe. It can also generate images and videos based on text inputs, such as creating a portrait of a fictional character or a scene from a movie.
Earlier this month, Baidu revealed that ERNIE Bot’s training throughput had increased three-fold since March and that it had achieved new milestones in data analysis and visualisation. ERNIE Bot can now generate results more quickly and handle image inputs as well. For instance, ERNIE Bot can analyse an image of a pie chart and generate a summary of the data in natural language.
Baidu is one of the first Chinese companies to obtain approval from authorities to release generative AI experiences to the public, according to Bloomberg. The report suggests that officials see AI as a “business and political imperative” for China and want to ensure that the technology is used in a responsible and ethical manner.
Beijing is keen on putting guardrails in place to prevent the spread of harmful or illegal content while still enabling Chinese companies to compete with overseas rivals in the field of AI.
Beijing’s AI guardrails
The “guardrails” include the rules published by the Chinese authorities in July 2023 that govern generative AI in China.
China’s rules go substantially beyond current regulations in other parts of the world and aim to keep generative AI development responsible and ethical. The rules cover various aspects of generative AI, including content, data, technology, fairness, and licensing.
One notable requirement is that operators of generative AI must ensure that their services adhere to the core values of socialism, while also avoiding content that incites subversion of state power, secession, terrorism, or any actions undermining national unity and social stability.
Generative AI services within China are also prohibited from promoting content that provokes ethnic hatred and discrimination, violence, obscenity, or false and harmful information.
Furthermore, the regulations reveal China’s interest in developing digital public goods for generative AI. The document emphasises the promotion of public training data resource platforms and the collaborative sharing of model-making hardware to enhance utilisation rates. The authorities also aim to encourage the orderly opening of public data classification and the expansion of high-quality public training data resources.
In terms of technology development, the rules stipulate that AI should be built on secure and proven foundations, including chips, software frameworks, computing power, and data resources.
Intellectual property rights – an often contentious issue – must be respected when using data for model development, and the consent of individuals must be obtained before incorporating personal information. There is also a focus on improving the quality, authenticity, accuracy, objectivity, and diversity of training data.
To ensure fairness and non-discrimination, developers are required to create algorithms that do not discriminate based on factors such as ethnicity, belief, country, region, gender, age, occupation, or health. Moreover, operators of generative AI must obtain licenses for their services under most circumstances, adding a layer of regulatory oversight.
China’s rules not only have implications for domestic AI operators but also serve as a benchmark for international discussions on AI governance and ethical practices.