Camera Arrays

by David Danto

In 2015 I wrote a series of blogs titled “The Past and Future of the Videoconference Room,” imagining future solutions. In it, I detailed what I felt would reinvent room cameras, turning them into what I called “Camera Arrays.”

I picked that name because, to me, it represented the ability to stitch together the feeds from tiny cameras – each covering a different piece of the image – to recreate the whole room for the far side. I speculated these cameras could get so small that they could be invisibly hidden inside and around the bezel of the video displays hung in the room.
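To make the stitching idea concrete, here is a minimal sketch using OpenCV’s off-the-shelf panorama Stitcher. (The camera file names are placeholders I’ve invented; a real room system would stitch live video frames, not still files.)

```python
# Minimal sketch: merge frames from several small cameras into one wide room view.
# Requires OpenCV (pip install opencv-python); the file names are hypothetical.
import cv2

# One frame from each tiny camera, each covering a different slice of the room
frames = [cv2.imread(f) for f in ("cam_left.jpg", "cam_center.jpg", "cam_right.jpg")]

stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
status, panorama = stitcher.stitch(frames)

if status == cv2.Stitcher_OK:
    cv2.imwrite("room_panorama.jpg", panorama)  # the recreated whole-room view
else:
    print(f"Stitching failed with status {status}")  # e.g., not enough overlap
```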

I still think stitched cameras are where the future is headed, but the concept of camera arrays has arrived before we got to the stitching part – not as the tiny invisible cameras I had suggested, but as small visible ones. There are now a bunch of those solutions on the market.

The emergence of multiple-camera videoconference rooms makes sense. As the pandemic-driven increase in remote participants popularized the term “meeting equity,” all of the manufacturers and service providers in the space scrambled to make the remote meeting experience more equitable. There are still many weak points on the path to that equity, from being able to interrupt the conversation when needed to collaborating on an in-room whiteboard, and those haven’t been well solved. But what the industry players could easily do is add more camera angles to the mix.

If you think about it, every in-office meeting is already likely to have multiple cameras in the room. There is whatever the room’s videoconferencing solution provides, and then there are the cameras on the computers and smart devices that people have brought into the room. Developers’ first thought was to connect all of these existing cameras, but that presented problems with security (how do we let a central device access personal devices?) and with the randomness of shots (people won’t place these cameras at optimal locations). So that idea was shelved (for now).

Instead, multiple solutions have been introduced that add cameras for the sole purpose of getting additional views into the meeting. If I’m looking at the screen in the room, the main camera sees my face; but if I’m looking at the person across the table from me, the remote viewer can only see the side of my face. The somewhat flawed and self-serving concept is that because a local person can always see the front of my face (which isn’t really true), we need to show the front of my face to the remote person as well.

Some manufacturers have accomplished this by simply adding more cameras on the sides of the room. One front camera and two side cameras give the AI inside the cameras the ability to choose the best shot of whoever is speaking. Crestron’s 1 Beyond and Huddly’s Crew follow this model. Other manufacturers have created camera devices that sit in the middle of the table to capture these front-face views.

In either case, machine learning algorithms choose the best shot to show to the remote participant (and often show a strip of all the shots as well). Taken together, these cameras are part of what I described in 2015 as an array, just as a microphone array is a number of microphones working in tandem.
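None of the vendors publish their selection logic, but conceptually it reduces to scoring each camera’s view and picking a winner. Here is a toy sketch, with invented signals (voice activity, how face-on the shot is) standing in for whatever the real ML models actually output:

```python
# Toy sketch of best-shot selection across a camera array.
# The signals and weights are illustrative inventions, not any vendor's algorithm.
from dataclasses import dataclass

@dataclass
class CameraShot:
    name: str
    voice_activity: float    # 0..1: is the framed person currently speaking?
    face_frontalness: float  # 0..1: 1.0 means looking straight at this camera

def best_shot(shots: list[CameraShot]) -> CameraShot:
    # Weight speech above angle: we want the current talker, framed face-on.
    return max(shots, key=lambda s: 0.7 * s.voice_activity + 0.3 * s.face_frontalness)

shots = [
    CameraShot("front", voice_activity=0.9, face_frontalness=0.2),
    CameraShot("left",  voice_activity=0.9, face_frontalness=0.8),
    CameraShot("right", voice_activity=0.1, face_frontalness=0.9),
]
print(best_shot(shots).name)  # "left": the talker, seen nearly face-on
```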

Microsoft's RoundTable camera in 2007

How valuable are these arrays? It depends. Pulling the history of the industry out of my back pocket for a moment: center-of-table (COT) cameras are not new. Microsoft introduced its “RoundTable” camera in 2007, eventually handing it off to Polycom to manufacture and sell. Other, smaller companies had solutions that sat on the table as well – like the Silex COT unit, which had a 360-degree camera on top and a video screen on each side. The user response to all of these at the time was, essentially, “meh.” Seeing a strip of everyone’s faces turned out to be a novelty, and adoption never exploded.

The concept wasn’t popularized again until a device called the Meeting Owl came out in 2016. This COT added a machine learning algorithm that showed the last few people who had spoken, rather than a strip of everyone. User adoption was somewhat better than the last time around, but not by much.
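That “last few speakers” strip is, at its core, just a small recency queue. A toy sketch of the idea (not Owl Labs’ actual code):

```python
# Sketch: keep a strip of the last few distinct speakers, most recent first.
from collections import deque

STRIP_SIZE = 3
strip: deque[str] = deque(maxlen=STRIP_SIZE)

def on_speaker_detected(name: str) -> None:
    # Move a returning speaker to the front instead of duplicating them;
    # with maxlen set, the oldest speaker falls off automatically when full.
    if name in strip:
        strip.remove(name)
    strip.appendleft(name)

for speaker in ["Ana", "Ben", "Ana", "Chen", "Dev"]:
    on_speaker_detected(speaker)
print(list(strip))  # ['Dev', 'Chen', 'Ana'] -- Ben has aged out of the strip
```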

It was (again) the pandemic that opened the door for this approach to be applied as a solution to the new problem of meeting equity. Combining the COT or side-wall views with a standard room view using AI created a new experience for the remote viewer. We now have various solutions with devices that sit in the middle of the table or hang from the ceiling, and others that mount on walls, all using invisible intelligence to create an array. These combine with a front-of-room camera to always select the best shot. Neat’s Center, Logitech’s Sight, and HP/Poly’s Studio E360 are all examples of this.

Center-of-table (COT) cameras from Neat (Center), Logitech (Sight), and HP/Poly (Studio E360)

These new camera arrays create an experience for the remote participant that is definitely better than what came before. Saying they solve the meeting equity problem, however, is like saying a band-aid “solves” a fractured limb – I’m sure it doesn’t hurt, but it’s only a fraction of the answer.

All of today’s camera array solutions provide a better remote experience than we had before. That is progress, which we should always stop and cheer for. Still, I’m waiting for the camera-free (hidden-in-the-bezel) conference room that operates like the viewscreen on the Starship Enterprise. As I recall, Captain Kirk never had a COT device sitting on the bridge to chat with the Klingons.

[TalkingPointz Note: We will be testing the Neat Center on the TalkingPointz Test Bench and publishing a review in the near future.]