New IBM Telum Processors Show AI Investment ‘Down At The Silicon Level’

“We’re baking this into the silicon and we’re integrating it across the software stack,” Barry Baker, IBM’s vice president of product management for IBM Z & LinuxONE, tells CRN in an interview. “That is no insignificant investment and not one that goes away anytime soon.”


IBM has revealed details on its new Telum Processor, the first from the tech giant to contain on-chip acceleration for artificial intelligence inferencing while a transaction is taking place.

The chip is also another sign of IBM’s deep investment in AI capabilities “down at the silicon level” as well as “across the software stack,” Barry Baker, IBM’s vice president of product management for IBM Z & LinuxONE, told CRN in an interview.

“We’re baking this into the silicon and we’re integrating it across the software stack,” Baker said. “That is no insignificant investment and not one that goes away anytime soon.”


[RELATED: The 10 Coolest AI Chips Of 2021 (So Far)]

The Telum Processor, unveiled Monday during the annual Hot Chips conference — which is virtual this year due to the COVID-19 pandemic — was in development for three years and aims to provide business insights at scale across banking, finance, trading, insurance and customer interactions, including preventing fraud in real time.

The chip aims to let users conduct high-volume inferencing without relying on off-platform AI products and services, avoiding the performance sacrifice that comes with moving data off the platform, according to the company.

Robert Keblusek, chief innovation and technology officer at Downers Grove, Ill.-based IBM partner Sentinel Technologies — a member of CRN’s 2021 Managed Service Provider 500 — told CRN in an interview that, for now, his company’s main experience with AI is in cloud adoption and services.

But if Telum delivers on IBM’s promises, the technology should prove “extremely helpful” for banks, which are key customers for Sentinel, Keblusek said. The chip could help prevent “significant losses” and identify suspicious transactions much faster than the hours or days the process can take today.

Banks “are constantly challenged with identifying fraud, and doing it at scale certainly seems like a challenge,” Keblusek said. “If IBM’s new chip can do this at scale and can detect more advanced fraud activities sooner or during the transaction, that seems like a game changer for that industry.”

AI is one of the areas receiving much of IBM’s research dollars, CEO Arvind Krishna told listeners on the company’s previous earnings call in July.

“We fundamentally believe that core to the competitiveness of every company going forward will be their ability to use AI to unlock real time value from the data wherever the data resides,” Krishna said. “Our efforts are focused on bringing our AI technologies to horizontal processes.”

Chip innovation has multiple vendors racing to make faster, more efficient AI chips. A handful of AI chip startups alone have raised nearly $1.5 billion in the past six or so months, and that doesn’t even cover all the AI hardware activity of the past few years.

Telum aims to let clients build and train AI models off platform, then deploy them and run inference on Telum-enabled IBM systems for analysis. Telum-based systems are planned for the first half of 2022, according to the company.

The chip’s centralized design allows users to enhance rules-based fraud detection, accelerate credit approval processes and identify trades and transactions likely to fail, among other examples, according to the company.
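As a rough illustration of what enhancing rules-based fraud detection with in-transaction inference can look like, the sketch below combines a simple rule with a stand-in model score inside the approval path — the pattern that on-chip acceleration is meant to make fast enough for real time. All function names, thresholds and logic here are hypothetical, not IBM’s API:

```python
# Hypothetical sketch of combining rules-based checks with an AI score
# inline, before a transaction commits. Illustrative only.

def rules_flag(txn: dict) -> bool:
    # Simple hard rule: a large amount from an unseen merchant is blocked.
    return txn["amount"] > 10_000 and not txn["merchant_seen_before"]

def model_score(txn: dict) -> float:
    # Stand-in for a trained model's fraud probability; a real deployment
    # would run inference on the accelerator instead of these heuristics.
    score = 0.1
    if txn["amount"] > 5_000:
        score += 0.4
    if not txn["merchant_seen_before"]:
        score += 0.3
    return min(score, 1.0)

def approve(txn: dict, threshold: float = 0.7) -> bool:
    # Decision made inline, during the transaction, not hours later.
    if rules_flag(txn):
        return False
    return model_score(txn) < threshold

print(approve({"amount": 6_000, "merchant_seen_before": True}))    # True
print(approve({"amount": 12_000, "merchant_seen_before": False}))  # False
```

The point of the pattern is that the model score augments, rather than replaces, the existing rules, so suspicious transactions can be stopped while they are still in flight.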

Telum is the first IBM chip with technology made by the company’s Research AI Hardware Center. IBM partnered with Samsung for Telum’s technology development. Telum was developed in the 7nm EUV technology node.

“The chip contains 8 processor cores with a deep super-scalar out-of-order instruction pipeline, running with more than 5GHz clock frequency,” according to a company statement Monday. “The completely redesigned cache and chip-interconnection infrastructure provides 32MB cache per core, and can scale to 32 Telum chips. The dual-chip module design contains 22 billion transistors and 19 miles of wire on 17 metal layers.”

AI workloads carry heavy computational requirements and operate on large quantities of data. Infusing AI into application workflows demands a heterogeneous system with the CPU and AI core integrated on the same chip for low-latency AI inference, according to IBM, and Telum is designed as exactly that kind of system.

“We see Telum as the next major step on a path for our processor technology, like previously the inventions of the mainframe and servers,” according to a blog post by IBM’s AI Hardware Center. “The challenges facing businesses around the world keep getting more complex, and Telum will help solve these problems for years to come.”

The blog post calls the proliferation of AI the kind of computing transition that comes once every 10 to 30 years.

“The last significant shift into a new form of compute was in the ’80s. With the new artificial intelligence workloads we’re facing, there’s an opportunity for great ideas from our team at IBM Research, working closely with IBM Systems, to create new AI compute capabilities and lay the foundation for the next generation of computing,” according to the blog post.

IBM launched the AI Hardware Center in 2019 and began building purpose-built AI hardware about six years ago. IBM has improved performance efficiency of its AI chips by 2.5 times a year since 2017, according to the company.

“Our goal is to continue improving AI hardware compute efficiency by 2.5 times every year for a decade, achieving 1,000 times better performance by 2029,” IBM said in a blog post.


Telum is “the next generation” of microprocessors serving critical workloads, Baker told CRN.

“More and more clients are looking to augment those workloads with AI,” Baker said. “The new Telum processor has all that you would expect from a modern processor that handles some of the most demanding transaction processing and database workloads in terms of hierarchical cache design, resiliency, security. But the real thing to hone in on here is the new integrated accelerator for AI. That’s on the chip. We devoted a portion of the silicon real estate to accelerate AI primitives that our clients are asking for.”

When clients open their on-premises systems to hybrid cloud, they need to reexamine the applications delivering new business value “and do that quickly,” Baker said.

“As they open up these systems to hybrid cloud, they need to modernize their application data estate,” he said. “The things we’re doing down at the silicon level accelerate and are aligned with that overarching strategy. There’s a lot of work we do on a regular basis with partners to help enable that.”

The investment in AI on chips will also help users outside of business applications, in areas including infrastructure, IT operations, data privacy, governance and security, Baker said. Enterprises of all sizes, not just the largest ones with the largest budgets, should see benefits from these areas of innovation.

“The demand on the IT operations in how you’re going to manage these systems, we will see that use case adopted across the market, across the different sides of our install base sooner, frankly, than the business analytics in the business AI case,” he said.