6.18.2015

IBM Scientists Present III-V Epitaxy and Integration to Go Below 14nm

IBM scientists in Zurich and Yorktown Heights, New York have unveiled, in two publications, a breakthrough approach for growing and integrating nano-sized III-V semiconductor devices on silicon. Both papers offer the microelectronics industry a possible answer to the long-term challenge of creating a new, powerful, energy-efficient, yet smaller transistor to pave the way for technology scaling at advanced CMOS nodes.

Researchers from IBM's Materials Integration and Nanoscale Devices group demonstrated a robust yet versatile approach for integrating III-V compound semiconductor crystals on silicon wafers – an important step toward making chips smaller and more powerful at lower power density.
Scanning electron microscope images of single-crystal structures fabricated using template-assisted selective epitaxy. For better visibility, the silicon is colored in green, and the compound semiconductor in red.
The technique developed can be used to combine III-V materials, such as indium gallium arsenide (InGaAs), with silicon germanium technology to create CMOS chips. It is fully compatible with current high-volume chip fabrication technology, making it economically viable for chip manufacturers.

The first paper, published last week in the journal Applied Physics Letters by lead author Heinz Schmid, describes crystal growth starting from a small area and evolving into a much larger, defect-free crystal. In this so-called template-assisted selective epitaxy, oxide templates are defined and selectively filled via epitaxy to create arbitrarily shaped III-V semiconductors such as nanowires, cross junctions, nanostructures containing constrictions, and 3D stacked nanowires.

Building on this small-seed-area epitaxy, IBM scientist Lukas Czornomaz is presenting today at the VLSI Symposium in Kyoto a solution for the large-scale, controlled integration of high-quality InGaAs on bulk silicon (Si) based on standard CMOS process modules. Gate-first, CMOS-compatible InGaAs FinFETs on Si with excellent performance have been demonstrated and integrated seamlessly into a CMOS manufacturing flow.
Cross-sectional TEM view along InP integrated on Si using the new technique. A high number of defects, mostly stacking faults, is observed at the seed region by HR-STEM (b) and TEM (c). Away from the seed, a perfect lattice structure is observed, with an 8% mismatch to Si corresponding to fully relaxed InP (d, e).
Integrating high quality III-V materials on silicon is critical for getting the benefit of higher electron mobility to build transistors with improved power and performance for technology scaling at 7 nm and beyond. Unfortunately, growing III-V materials on 300 mm silicon substrates isn’t easy and often produces wafers with so many defects that they are rendered useless.

The novel epitaxy and integration process described allows the materials to be grown precisely, with a low number of defects, at the wafer positions where they are needed, and therefore represents a significant, economical and manufacturable breakthrough toward the introduction of high-mobility channels into advanced CMOS nodes.

The new technique may also impact photonics on silicon, with active photonic components integrated seamlessly with electronics for greater functionality. 

Both papers are part of IBM's $3 billion, five-year investment to push the limits of silicon technology to 7 nanometers and below. More specifically, IBM scientists are motivated to integrate III-V materials on silicon for faster and more powerful devices. IBM is betting that future chips made of these materials will enable more energy-efficient and powerful cloud data centers and consumer devices.

6.11.2015

Using analytics to get smart about water

Profile of a scientist: Ernesto Arandia

Ernesto Arandia, a data scientist with the IBM Smarter Water team in Dublin, Ireland, grew up half a planet away, in landlocked Bolivia. It was this life high in the Andes, where water availability is an issue for all citizens, that spurred his interest in water management and the pursuit of a PhD in Hydraulic Science at the University of Cincinnati.

He parlayed his research and creation of tools to manage water usage via smart meters and demand forecasting into a job with IBM Research-Ireland in 2013. Today, he is working on two research applications, a Water Network Analytics Toolkit and iWIDGET, both of which address water demand and management by reducing cost and waste.

What interested you in the IBM Research-Ireland Smarter Cities Technology Centre (SCTC)?

EA: My interests are in modeling urban water and environmental systems, stochastic processes, and data mining, as well as the design and optimization of hydraulic and civil infrastructure. With my civil and environmental engineering background, the IBM lab in Dublin seemed the perfect place to pursue my research on sustainability projects in the area of water management.

What did you focus on for your PhD at the University of Cincinnati?

EA: My research in Cincinnati involved experimental and computational work for the accurate estimation of real-time water usage measured by smart meters. It was part of an EPA-funded project, started after 9/11, to develop a contaminant warning system, which required modeling contaminant transport and water quality in water distribution systems. The main goal was to develop a fine-resolution model capable of predicting how water is used in an urban water network across space and time.

The experimental part involved setting up fixed monitoring stations to collect smart meter data from thousands of utility customers. Cincinnati Water Works was one of the first utilities in the US to fully adopt smart meters in their metro area – which created a unique research opportunity for me.

My first challenge was to find areas where the stations could monitor the consumption of a sample of users as large and as diverse as possible. This required a Geographic Information Systems (GIS) analysis of topography, demographics and socio-economics. The data collection stage took about two years and provided a dataset rich enough to understand, characterize, and represent water usage.

One of the main outcomes of the project was a model for the synthetic generation of demands that vary periodically and resemble real smart meter demands. Another was a model to forecast water demand aggregated at useful scales, which can be used to estimate and forecast water consumption in an urban water network in real time. A simplified illustration of the synthetic-demand idea is sketched below.
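As a rough sketch of what such a generator might look like, the snippet below produces a periodic daily demand pattern with random variation. The sinusoidal-peaks-plus-noise form, the parameter values, and the function name are illustrative assumptions, not the models from the thesis:

```python
import numpy as np

def synthetic_demand(n_days=7, step_min=15, base=0.4, seed=0):
    """Generate an illustrative residential water-demand series (L/min).

    A daily pattern with morning and evening peaks, multiplied by
    log-normal noise and sampled every `step_min` minutes.
    """
    rng = np.random.default_rng(seed)
    t = np.arange(0, n_days * 24 * 60, step_min) / 60.0  # elapsed hours
    hour = t % 24
    # Two Gaussian bumps around 7:00 and 19:00 mimic typical usage peaks
    pattern = (np.exp(-0.5 * ((hour - 7) / 1.5) ** 2)
               + 0.8 * np.exp(-0.5 * ((hour - 19) / 2.0) ** 2))
    noise = rng.lognormal(mean=0.0, sigma=0.3, size=t.shape)
    return base + pattern * noise

demand = synthetic_demand()
print(f"mean demand: {demand.mean():.2f} L/min over {demand.size} samples")
```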
What was the first project you worked on after joining the Smarter Water team?

Pictured, the Smarter Water team, L-R: Joe Sawaya-Naoum, Ernesto Arandia, Bradley Eck and Fabian Wirth
EA: Our Smarter Water team entered a challenge posed by the Water Distribution Systems Analysis Conference called the Battle of Background Leakage Assessment for Water Networks (BBLAWN). BBLAWN asked for a methodology for recommending changes to the design and operation of a water distribution system to minimize total expenditure, while meeting service requirements.

Our entry paper, called A Simulation-Optimization Approach for Reducing Background Leakage in Water Systems, detailed a hydraulic model that simulates background leakage, custom implementations of heuristic algorithms, and optimization solvers. The solution methodology we proposed decomposed the overall problem into smaller, more tractable problems, each aimed at a single type of decision. Examining each problem individually simplified the software implementation and the interpretation of results, and allowed parts of the problem to be examined in parallel.
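The actual entry couples a full hydraulic simulator of the network with heuristic solvers; purely as a sketch of the decomposition idea, the toy coordinate-descent loop below optimizes one decision type at a time against a black-box cost function. The function names and the surrogate cost are hypothetical:

```python
def total_cost(design):
    """Stand-in for a hydraulic simulation returning annualized cost (EUR).

    In the real workflow this would run a leakage-aware hydraulic model
    of the network and sum capital and operational expenditure.
    """
    return ((design["pipes"] - 3) ** 2 + (design["pumps"] - 2) ** 2
            + (design["valves"] - 5) ** 2)  # toy surrogate

def optimize_subproblem(design, key, candidates):
    """Optimize a single decision type while holding the others fixed."""
    best = min(candidates, key=lambda v: total_cost({**design, key: v}))
    return {**design, key: best}

design = {"pipes": 0, "pumps": 0, "valves": 0}
for _ in range(3):  # a few passes over the decision types
    for key, candidates in [("pipes", range(10)),
                            ("pumps", range(6)),
                            ("valves", range(12))]:
        design = optimize_subproblem(design, key, candidates)
print(design, total_cost(design))
```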

The annual expenditure of the simulated town (called “C-Town”) was €3.9 million, but our models were able to reduce this to €1.5 million. This took into account both capital expenditure (rehabilitation and infrastructure, such as replacing pumps, pipes, valves and tanks) and operational expenditure (electricity costs as well as environmental penalties for wastage). Our main recommendation to “C-Town” was to invest about €600,000 to correct background leakage issues. This greatly reduced the amount of water lost in the network, as well as costs such as environmental penalties. This solution illustrates what a city could achieve in terms of managing leakage and overall operations.

Our work paid off: we won the competition for having the best combination of solution methodology and economy of proposed infrastructural and management changes.

What are you working on now?

EA: I am currently working on a Water Networks Analytics Toolkit. It provides decision support for a range of operational and planning problems on water networks, plus building blocks for new solutions. It's a web-based application that determines the optimal time to replace mechanical equipment such as pumps and meters, calculates water bills under various pricing schemes, and makes adaptive water pricing recommendations.
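As an example of the kind of building block involved, here is a minimal sketch of bill calculation under an increasing-block (tiered) pricing scheme. The tier boundaries and rates are made up for illustration and are not the toolkit's actual API:

```python
def water_bill(usage_m3, tiers=((10, 1.20), (30, 2.10), (float("inf"), 3.50))):
    """Compute a bill under an increasing-block (tiered) pricing scheme.

    `tiers` is a sequence of (upper_bound_m3, price_per_m3) blocks;
    the rates here are invented for illustration.
    """
    bill, lower = 0.0, 0.0
    for upper, price in tiers:
        if usage_m3 <= lower:
            break
        billable = min(usage_m3, upper) - lower  # volume falling in this block
        bill += billable * price
        lower = upper
    return bill

print(water_bill(25.0))  # 10*1.20 + 15*2.10 = 43.50
```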

I am also involved with a project called iWIDGET, which uses analytics and smart meters to gather real-time data on water and related energy usage. The aim of the project is to improve the management of urban water demand by reducing waste, providing utilities with deeper insights into end-user demand, and ultimately reducing customer water and energy costs. 

iWIDGET provides information to householders to help them reduce their water usage, and also gives utility companies better visibility of their customers’ usage. This improved visibility lets the utility perform more accurate demand forecasting, and alert customers to suspected leaks.

What is the future of smarter water management?

EA: Water is a constrained resource, so utilities have the dual task of reducing losses through leaks in the network as well as reducing overall consumption. The best way to tackle these issues is by integrating government regulation, customer behavior and the utility’s own approach to improving infrastructure and measurement of usage.

Water utilities can learn from the energy and utilities sector, which has a more highly developed model for pricing and demand management. The energy sector uses, for instance, an effective “time of use” pricing mechanism that encourages consumers to use less power at peak times, reducing the chance of an outage.

Energy suppliers and distributors are also more proactively optimizing their networks, using demand forecasting to balance energy supply (including renewables) with consumer demand. Water utilities should adapt and add to these components of the energy model instead of trying to reinvent the wheel.

What is your advice to future water scientists?

EA: I see great opportunities to do joint research with scientists working in the energy domain. I think this collaboration can lead to new and more effective methodologies for water and energy management. It can also yield new combined strategies to improve water and energy efficiency. The data to support such cross-discipline research is increasingly becoming available, and it is well known that systems cannot be fully described in isolation. For instance, planning the optimal rehabilitation of a water network cannot be approached effectively without regard for optimal electricity consumption.

I feel privileged to work in an environment that offers an abundance of high-quality human and material resources to address meaningful projects and to conduct exciting and innovative research. I think it is important for young scientists to take the knowledge and solutions they have developed and bring them back to their origins – in my case, Latin America.

For instance, in Bolivia, we do not have a continuous supply of water, which impacts our poorer citizens the most. I think that the work I have done can help make a difference. Technology that has been developed in advanced economies can be brought to developing countries and make an impact on their economies and quality of life.

6.04.2015

Meet IBM researcher Shiri Kremer-Davidson

A social media maven whose dashboard can score your social engagement

Shiri Kremer-Davidson
Editor’s note: Shiri Kremer-Davidson is a specialist in graph-based social analytics at IBM Research - Haifa

What do you do at IBM? 

I’m part of a cross-organizational team that built the Personal Social Dashboard. It’s a pilot web application that provides employees with an engagement score along with insights into their individual social activity within the organization. It helps people understand how their activity is being consumed and how they are perceived by their colleagues – something we are generally unaware of.

I lead the engagement analytics that produce the resulting scores and evidence shown in the application. With new studies showing how important it is to raise the level of employee engagement inside the enterprise, it’s crucial to have a way to let employees and managers know how they’re doing.

Why is this project interesting for you? 

For me, this is a fascinating journey into learning how IBMers socially engage and how they use our internal social network, IBM Connections. I was also keen on identifying success stories and extracting personalized recommendations on how people can grow their voice inside the enterprise. If you’re motivated to increase your digital presence, the dashboard gives you feedback on how people are reacting to you and what you’re sharing. It helps you see whether you’re on the right track.

Personally, I wasn’t publicly active enough on social media before this project. This tool is also helping me increase my social presence within IBM. 

What insights did you discover as part of creating this tool? 

We had a client engagement this year with a large European bank where we deployed the dashboard. We wanted to help in areas such as increasing adoption of Connections, providing input for organizational changes, and gauging collaboration across and between different divisions.

Using the dashboard’s analytics along with an enterprise graph, we found how different divisions interact and the strength of those connections; how information flows within the organization; who the primary influencers and brokers are within and between organizations; and more.

How did this dashboard get started? 

It all started with a project called Breadcrumbs, led by Analytics Strategist Marie Wallace. We built an enterprise graph over all the social activities done on Connections inside IBM. It included historic and current data for hundreds of thousands of employees. We also built a Big Data infrastructure that would allow us to perform a wide range of analytics over it.

What were you looking for, initially? 

First, we started looking at how social activity patterns over time can serve as a clue to who is staying and who might be leaving the company. Next, we began working with the CIO to develop a solution that could empower employees with knowledge of their social eminence and guide them on how to improve it. In parallel, we wanted to help management understand how their employees engage socially – but without compromising employee privacy.

Since I had done research on social reputation, the idea of incorporating my experience into such a social engagement dashboard was intriguing. It was fun working together with teams across IBM.

How does your dashboard measure social eminence? 

The dashboard gives you four scores: Activity, Reaction, Eminence, and Network. All scores are relative inside the organization.
  • Activity is a score that goes up or down depending on how active you are.
  • Reaction takes into account likes, reads, shares, comments, and other responses to your content.
  • Eminence reflects the number of people trying to interact with you and your content – following you, sharing files with you, etc.
  • Network indicates how many followers and colleagues you have.
Capture of my page in the Personal Social Dashboard
The dashboard scores are periodically refreshed and we improve the algorithms regularly. Ultimately, we’d like to add personalized recommendations to guide employees on how to raise their eminence and how to make their voice more prominent inside the organization. 
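The scoring algorithms themselves aren't public, but the general pattern – score each dimension relative to the rest of the organization, then combine – can be sketched with percentile ranks. Everything below (the metric names, population values, and the unweighted average) is a hypothetical illustration:

```python
from bisect import bisect_right

def percentile_score(value, population):
    """Score a raw metric relative to everyone else (0-100 percentile)."""
    ranked = sorted(population)
    return 100.0 * bisect_right(ranked, value) / len(ranked)

# Hypothetical raw metrics for one employee vs. the whole population
raw = {"activity": 42, "reaction": 130, "eminence": 18, "network": 77}
population = {
    "activity": [5, 12, 42, 60, 88],
    "reaction": [10, 90, 130, 400],
    "eminence": [2, 5, 18, 40],
    "network": [20, 77, 150, 300],
}
scores = {k: percentile_score(v, population[k]) for k, v in raw.items()}
overall = sum(scores.values()) / len(scores)  # simple unweighted average
print(scores, f"overall: {overall:.0f}")
```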

What are your future plans? 

We are getting a lot of positive feedback for the pilot. Employees feel that it is helping them understand their social presence and increase their voice in the enterprise. Our pilot currently has more than 9,000 users and is rapidly growing.

In the future we plan to integrate additional data sets into the tool and add features our user community finds important, such as personalized recommendations.

Find more information on IBM JStart and on Shiri’s web page.

6.02.2015

Containing Ebola With Mathematical Modeling

When the Ebola virus looked as if it might run rampant across Guinea, Liberia and Sierra Leone, many groups at IBM sprang into action. One was a team of Almaden-based researchers working primarily on a pet food safety project with Mars Incorporated. The team organized a weekly Wednesday morning conference call in which some 40 organizations addressed ways to contain the spread of the disease until a cure could be found.

In this episode of Inside IBM Research, Kun Hu, a researcher at IBM Almaden Research Center, talks about STEM, the Spatio-Temporal Epidemiological Modeler, a mathematical tool that lets researchers simulate the evolution of highly contagious diseases such as Ebola with the goal of understanding them — and, ideally, preventing their spread altogether.
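STEM itself is an open-source spatiotemporal modeling platform far richer than any single equation, but the flavor of the underlying mathematics can be seen in a minimal SIR compartmental model. The sketch below uses SciPy, and the rates are illustrative, not calibrated Ebola parameters:

```python
import numpy as np
from scipy.integrate import odeint

def sir(y, t, beta, gamma):
    """Classic SIR model: susceptible, infected, recovered fractions."""
    s, i, r = y
    return -beta * s * i, beta * s * i - gamma * i, gamma * i

t = np.linspace(0, 400, 401)                  # days
beta, gamma = 0.12, 1 / 14                    # illustrative rates, R0 ~ 1.7
s, i, r = odeint(sir, [0.999, 0.001, 0.0], t, args=(beta, gamma)).T
print(f"epidemic peak: {i.max():.1%} of the population infected "
      f"on day {t[i.argmax()]:.0f}")
```

Varying beta in such a model is one way to explore how interventions that reduce contact rates change the course of an outbreak.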

Inside IBM Research is recorded and edited by Brook Finkelstein, IBM Research communications.

5.29.2015

Replacing Solder with All-Copper Interconnects

Today we can cram more than one billion transistors onto a single microprocessor die thanks to 50 years of miniaturization of integrated circuits. However, this scaling is becoming a headache for engineers as transistor dimensions approach only a few atoms.

One alternative concept is 3D integration, the stacking of individual integrated circuits on top of each other. This improves not only integration density but also energy efficiency and compute performance, thanks to the proximity and improved wiring of electronic circuits – similar to the neural network inside our brain.

But challenges arise as you stack the chips, namely the demand for increased current density and communication bandwidth into the chip stack. Currently, microprocessors are connected to printed circuit boards (PCBs) by tens of thousands of solder balls arranged in a regular grid on the bottom side of the chip. The pain point is that as much as 80% of the electrical interconnects are occupied with provisioning up to 300 amps of current, leaving only 20% for the main purpose of signaling.
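A quick back-of-the-envelope makes the imbalance concrete (the 10,000-ball count is an assumed round number within the "tens of thousands" cited above):

```python
# Back-of-the-envelope: power vs. signal interconnects on a chip package
balls_total   = 10_000                     # assumed count ("tens of thousands")
power_balls   = int(balls_total * 0.80)    # ~80% devoted to power delivery
signal_balls  = balls_total - power_balls  # ~20% left for signaling
current_total = 300.0                      # amps delivered to the chip stack

print(f"{current_total / power_balls * 1e3:.1f} mA per power ball, "
      f"only {signal_balls} balls left for signals")
```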

The solder ball interconnect technology was invented by IBM engineers in the 1960s, resulting in the still widely used flip-chip technology, also known as controlled collapse chip connection (C4). A chip with solder balls is placed onto a PCB and both components are transferred into an oven. The solder melts above 230°C and instantly wets the pads of the PCB. During cool-down, the solder solidifies to form the electrical contact.

Cross-section of All-Copper Interconnects formed by low-temperature sintering of copper nanoparticles
A challenge is that voids form in the solder interconnects at high current densities, causing system failures. The voids are a consequence of the passing “electron wind”, which transports material in the direction of the current flow – a phenomenon termed electromigration. An alternative interconnect material to solder is copper, a material already used for the electrical wiring of most chips and PCBs. Copper is about ten times more resistant to electromigration than solder.

IBM researchers, together with colleagues from Intrinsiq Materials, SINTEF and the Technical University of Chemnitz, invented a process in which solder is replaced with copper to create All-Copper Interconnects, overcoming the physical limitation of copper's high melting temperature (1085°C). These interconnects can be formed via the self-assembly of copper nanoparticles between a copper pillar and a pad, followed by an annealing step at temperatures as low as 150°C. The resulting interconnect allows the current density to be increased and the interconnect pitch to be reduced. IBM scientists from Zurich are presenting their latest developments using copper today in San Diego at the IEEE Electronic Components and Technology Conference (ECTC).

To form All-Copper Interconnects, the scientists work with a paste containing copper micro- and nanoparticles, which is spread out by so-called “doctor blading” to yield a uniform Cu-paste film. The Cu paste is transferred to the copper pillars of a microprocessor chip simply by dipping. The chip is then aligned and placed onto the pads of a PCB. The final joint is achieved by annealing at a temperature well below the melting temperature of copper. This is an example of the beauty of nanotechnology: nanoparticles have a very large surface area compared to their volume, which promotes the diffusion of copper atoms to form metallic contacts. Formic acid is applied during the annealing step to reduce the copper oxide on the nanoparticles.

Formation of All-Copper Interconnects: a) copper film formation by doctor blading, b) dipping of the microprocessor into the paste film, c) alignment and d) placement of the chip onto the substrate, including annealing
The novel All-Copper Interconnects have the potential to overcome the current density and interconnect pitch limitations of today's solder joints and thereby support the continuation of system integration, in particular 3D stacking, for decades to come.


While many hours of lab work remain, Dr. Thomas Brunschwiler at IBM Research is optimistic. He adds, “If we can demonstrate the All-Copper Interconnects in high-volume production, it will have a significant impact on devices ranging from laptops to data centers and our vision of a supercomputer the size of a shoebox will become reality.”

This research is part of HyperConnect, a European-funded project with 10 partners from industry, research institutes and universities.

5.14.2015

Demand forecasting by means of data-driven techniques

Editor’s note: this article is by Mathieu Sinn, Research Manager, and Francesco Dinuzzo, Research Scientist, members of the Exploratory Predictive Analytics team at IBM Research-Ireland

Mathieu Sinn
We all plan, predicting what we'll need next week or next month. Businesses do the same: they forecast what kind, and how many, of their goods and services customers will need in the future. What if a machine could learn how, and forecast these demands? Maybe you, personally, wouldn't want one to. But at our lab in Dublin, we're developing machine learning algorithms that let businesses, from retailers to energy and utility companies, automate their demand forecasting.

Business planners rely on demand forecasting to find patterns in a multitude of data sources covering internal and external factors. They use algorithmic models and predictive analytics to create a “demand forecast” that attempts to predict the amount of goods or services people, and businesses, will demand in the future. Retailers base in-stock management decisions like ordering and storage, as well as supply chain management, on demand forecasts. Energy utility companies use forecasting for scheduling operations, investment planning and price bidding. Now that we can integrate data from these disparate sources, our predictive analytics team is applying machine learning to improve demand forecasting accuracy and granularity.

Our team builds machines that learn from their mistakes and improve forecasting accuracy over time. Every time the machines apply the algorithm to compute a forecast, they adjust it to make it – and their predictions – better. Machine learning can automatically scale and adjust these predictive tasks. Instead of a domain expert building forecasting models manually, machine learning methods can combine large quantities of historical data with knowledge from domain experts to learn relevant models automatically and with better predictive accuracy. Our analytics and systems seamlessly export the insights gained from data at rest into data in motion to provide reliable forecasts for real-time operations. We see a great opportunity to use machine learning in demand forecasting and to apply it across industries.
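To make the idea concrete, here is a minimal sketch of learning a day-ahead demand forecaster from lagged historical demand plus a weather covariate. The synthetic data, feature choices, and scikit-learn model are illustrative stand-ins, not our production pipeline:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)
hours = np.arange(24 * 90)                       # 90 days of hourly data
demand = (50 + 10 * np.sin(2 * np.pi * hours / 24)
          + rng.normal(0, 2, hours.size))        # synthetic load (MW)
temp = (15 + 8 * np.sin(2 * np.pi * hours / (24 * 365))
        + rng.normal(0, 1, hours.size))          # synthetic temperature (C)

# Features for hour t: demand at t-24h and t-168h, hour of day, temperature
lag = 168
X = np.column_stack([demand[lag - 24:-24],   # demand 24 h earlier
                     demand[:-lag],          # demand one week earlier
                     hours[lag:] % 24,       # hour of day
                     temp[lag:]])            # temperature covariate
y = demand[lag:]

model = GradientBoostingRegressor().fit(X[:-24], y[:-24])  # hold out last day
mae = np.abs(model.predict(X[-24:]) - y[-24:]).mean()
print(f"day-ahead MAE: {mae:.2f} MW")
```

Retraining on each new batch of meter data is one simple way such a model "learns from its mistakes" over time.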

Francesco Dinuzzo
Through our predictive analytics research in energy demand forecasting and simulation, we are seeing an increasing need for such data-driven, large-scale forecasting solutions across industrial sectors, which can employ these same techniques.

Demand forecasting for smarter energy grids

We have been collaborating with VELCO (Vermont Electric Power Company) since last year to build the Vermont Weather Analytics Center. Its goal is to provide smarter grid resiliency and management, including renewable energy, using our energy demand forecasting systems and analytics capabilities alongside high-resolution weather forecasting from the IBM Deep Thunder technology.

“Renewable energy production has a strong dependency on weather; likewise, energy demand also depends on weather. Therefore, high resolution, high accuracy forecasting will be a key enabler of the coming transition to clean energy,” said Chandu Visweswariah, IBM Fellow and Director of IBM’s Smarter Energy Research Institute.

“The project is built around a Vermont-specific version of Deep Thunder predictive weather model, coupled with a renewable energy forecasting model and an energy demand model. These models apply analytics to in-state and regional weather data to produce accurate weather, renewable energy and demand forecasts.”

We also recently published a paper with researchers from EDF (Electricité de France), one of the world’s largest energy providers, Massive-Scale Simulation of Electrical Load in Smart Grids using Generalized Additive Models (“GAMs”). In this paper, we describe how to simulate the energy load on a smart grid, including additional supply from renewable energy sources and demand from electric cars, to generate a more accurate energy demand forecast. Accurate load forecasts improve the efficiency of supply, as they help utilities reduce operating reserves, act more efficiently in the electricity markets, and provide more effective demand-response measures.
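As a flavor of what a GAM looks like in practice – load modeled as a sum of smooth functions of covariates such as temperature and time of day – here is a hedged sketch on synthetic data, using the pyGAM library purely as a stand-in for the large-scale tooling described in the paper:

```python
import numpy as np
from pygam import LinearGAM, s   # pip install pygam

rng = np.random.default_rng(2)
n = 5000
temp = rng.uniform(-5, 30, n)                     # temperature (C)
hour = rng.integers(0, 24, n)                     # hour of day
load = (60 + 1.5 * np.maximum(15 - temp, 0)       # heating demand below ~15 C
        + 8 * np.sin(2 * np.pi * hour / 24)       # daily cycle
        + rng.normal(0, 3, n))                    # noise

# Additive structure: load = f1(temperature) + f2(hour), each a smooth spline
gam = LinearGAM(s(0) + s(1)).fit(np.column_stack([temp, hour]), load)
print("explained deviance:", gam.statistics_["pseudo_r2"]["explained_deviance"])
```

The appeal of the additive structure is interpretability: each fitted smooth term can be plotted and inspected on its own, e.g. to see the heating-driven rise in load at low temperatures.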

We concluded that by using real energy demand data, the right class of models, and IBM tools like InfoSphere Streams, businesses can extract insightful models from data at rest and deploy them directly to support real-time operations. And IBM SPSS Modeler provides not only modeling but also visualization, helping utilities quickly capture changing trends in supply and demand and rapidly deploy these insights to optimize the real-time operation of the grid and of production.

From these two energy and utilities projects, our team believes that we can replicate similar use cases across many different industries and businesses. We will share more of our findings this July at the 32nd International Conference on Machine Learning (ICML) in Lille, France.

Read more about our work with machine learning and demand forecasting on our research areas and applications page.

5.13.2015

Silicon photonics: The future of high-speed data

Editor's note: This article is by Dr. Will Green, Silicon Integrated Nanophotonics department manager, IBM Research

IBM’s research in brain-inspired computing, quantum computing, and silicon photonics is preparing to take computing in entirely new directions. The neuromorphic chip is getting smarter, the quantum bits are being scaled out, and in the near future, my team’s CMOS Integrated Nano-Photonics Technology will help ease data traffic jams in all sorts of computing and communications systems – pushing cloud computing and Big Data analytics to achieve their full potential.

For the first time, we have designed and tested a fully integrated, wavelength-multiplexed silicon photonics chip capable of optically transmitting and receiving information at data rates up to 25 Gb/s per channel. This will soon make it possible to manufacture optical transceivers capable of transmitting 100 gigabits of data per second.

Silicon photonics technology gives computational systems the ability to use pulses of light to move data at high speeds over optical fibers, instead of using conventional electrical signals over copper wires. Optical interconnects, based on vertical-cavity surface-emitting laser (VCSEL) technology and multi-mode fiber, are already being used in systems today. But their transmission range is limited to a relatively short distance of about 150 meters. Today, large data centers continue to scale in size to support exponentially growing traffic from social media, video streaming, cloud storage, sensor data, and much more. The longest optical links in such systems can be more than a kilometer in length. As a result, new optical interconnect solutions that can meet these requirements at low cost are needed to keep up with future system growth.

How light boosts bandwidth

Our silicon photonics technology is designed to transmit optical signals via single-mode optical fibers, which can support links many tens of kilometers long. Moreover, we have built in the capability to use multiple colors of light, all multiplexed to travel within the same optical fiber, to boost the total data capacity carried. The recently demonstrated silicon photonic chip can combine four wavelengths (all within the telecommunications infrared spectrum), allowing us to transmit four times as much data per fiber. The chip demonstrates transmission and reception of high-speed data at 25 Gb/s over each of these four channels, so within a fully multiplexed design, we’re able to provide 100 Gb/s aggregate bandwidth.

A cassette carrying several hundred chips intended for 100 Gb/s transceivers, diced from wafers fabricated with IBM CMOS Integrated Nano-Photonics Technology
In addition to the expanded range and bandwidth per fiber, our new photonics technology holds several other advantages over what is available today. Perhaps most importantly, the technology’s manufacturing makes use of conventional silicon microelectronics foundry processes, meaning volume production at low cost. In addition, the entire chip design flow, including simulation, layout, and verification, is enabled by a hardware-verified process design kit, using industry-standard tools. As a result, a high-speed interconnect circuit designer does not require an in-depth knowledge of photonics to build advanced chips with this technology. They can simply follow the standard practices already in place in the CMOS industry.

This unified design environment is mirrored by our integrated platform, which allows us to fabricate both the electronic and photonic circuit components on a single silicon chip. Rather than breaking up the electrical and optical functions, we integrated the optical components side by side with sub-100nm CMOS electrical devices. This results in a smaller number of components required to build a transceiver module, as well as a simplified testing and assembly protocol, factors which further contribute to substantial cost reductions.

Performance of the fully integrated, wavelength-multiplexed silicon photonics technology demonstrator chip. The eye diagrams illustrate four separate transmitter channels (right) exchanging high-speed data with four receiver channels (left), each running at a rate of 25 Gb/s.
While the primary applications for silicon photonics lie within the data center market, driven by Big Data and cloud applications, this technology is also poised to have a large impact within mobile computing. There’s a need for low-cost optical transceivers to shuttle large volumes of data between wireless cellular antennae and their base stations, often located many kilometers away. As the data bandwidth available to mobile users increases generation after generation, the number of individual cells required to support the traffic does the same. Our technology can deliver faster data transfer in higher volume and across larger areas, in order to support the inevitable growth while controlling costs.

There has been significant discussion around the 50th anniversary of Moore’s Law and about whether it has reached its end. Silicon photonics fits into that "next switch" conversation. On the processor side, there’s still a fairly consistent trajectory in terms of CMOS technology scaling – down to 10nm, 7nm, and even smaller. The role of our CMOS Integrated Nano-Photonics technology will be to reduce communication bottlenecks inside of systems, and to allow expansion of their capacity for processing huge volumes of data in real time.

Kilometer-scale data centers are emerging. Big Data and the Internet of Things are connecting people and information in ways that were unimaginable only a few years ago. IBM’s silicon photonics technology will augment that growth on the ground, into the Cloud, and beyond.