Technology


Results From This Blind Smartphone Camera Test Will Surprise You

Marques Brownlee, also known as MKBHD, has hosted his third annual blind smartphone camera test, a 17-minute video featuring 20 new smartphones. The phones were grouped into a bracket that hid the phone names while the public voted for their favorites.

First of all, it’s important to note right from the start that this “test” is the furthest thing from scientific in nature.

“This isn’t a scientific test at all,” Brownlee says. “In fact, it’s kind of the opposite of a scientific test.”

The point of this playoff-style bracket isn’t to objectively claim one camera is better than another, but rather to serve as a case study for what people think makes a photo “good.” By the end, and after over 10 million total votes were cast, Brownlee was able to point out some interesting conclusions from the information he gathered.

The concept of the test is simple: Brownlee put together a seeded bracket (seeds were determined by his team, and in the end seeds honestly did not seem to matter very much) and associated each camera in the test with a letter. Here is the bracket as those who voted on the test saw it before the first images were posted:

Those who would be voting had no idea which phones were up against each other round by round, which was the point. Brownlee wanted to see how people would vote based purely on the images that were taken and nothing else. In each round, all smartphones would be placed in the exact same position and photograph the exact same subject under the exact same circumstances. In this way, the only differences in how a photo looked were based purely on how each smartphone is programmed to capture an image. Brownlee went so far as to not even tell the camera where to focus, leaving that up to the smartphone as well.

All the images that Brownlee took may look simple and unchallenging, but that was the point. The idea was to create scenes that could easily happen in everyday life while also integrating challenging aspects into each image that may not be immediately noticeable. In one image there might be a wide mix of shadows and highlights to test each camera’s dynamic range, while in another there might be a lot of textures and competing colors to show how each camera adapted to the differences.

The photos were posted to both Twitter and Instagram Stories, as each allows for polls. The first round used this photo:

The second round used this image:

The third round used this photo:

And the final round used this photo:

After the polls for all rounds closed, Brownlee revealed which cameras people had actually been voting on:

At the beginning of his video, Brownlee makes two important notes: First, each year he has done this test, no camera brand has ever repeated a win. Second, the iPhone has never once made it out of the first round.

As you can see in the finished bracket with the new winner, both those notes remained true after this year’s test:

So, naturally, the next question would be to ask, “Is the Asus Zenfone 7 Pro the best smartphone camera?” The answer to that is probably, “well, not necessarily.” As he stated in the beginning, this wasn’t a scientific test and the results weren’t intended to necessarily crown one device the best camera.

What it did do was indicate what the general population of voters finds most attractive about a given image. Brownlee makes several interesting notes about the images he and his team chose to take, and why he thinks the iPhone in particular continues to struggle in this competition.

“White balance has been a major, key new factor from our understanding from the smartphone bracket this year,” he says. “And I would go so far as to say that it looks like this has been the reason that the iPhone has lost in the first round every single time.”

White balance appears to be a huge factor in what makes an image appealing, and Apple’s iPhones seem to lean more towards blue than other cameras on the market. This makes warmer tones – such as his skin tone, as Brownlee points out – appear oddly hued.

Brownlee says that what he and his team deduced is that when content and brightness are the same, people will choose a photo with slightly better color saturation. Now there is of course a limit to this, as oversaturation can make images look terrible, but a correct white balance also has a direct impact on how the eye perceives the saturation of certain tones. The cooler the white balance, the more saturated blues will look. The warmer the white balance, the more saturated warmer colors like orange and red will appear.
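To make that relationship concrete, here is a minimal Python sketch. It is not MKBHD’s methodology, just an illustration that treats white balance as simple per-channel gains and shows how a cool bias drains the apparent saturation of a warm skin tone while a warm bias boosts it. The RGB values and gain factors are made up for the example.

```python
# A minimal sketch, not MKBHD's methodology: white balance treated as
# per-channel gains, showing how a cool bias drains the apparent
# saturation of a warm skin tone. All values are illustrative.
import colorsys

def apply_wb(rgb, r_gain, b_gain):
    """Scale the red and blue channels and clamp to [0, 1]."""
    r, g, b = rgb
    return (min(r * r_gain, 1.0), g, min(b * b_gain, 1.0))

skin = (0.80, 0.60, 0.50)  # a warm, medium skin tone (made up for the example)

for label, gains in {"neutral":   (1.00, 1.00),
                     "cool bias": (0.90, 1.15),   # a blue-leaning camera
                     "warm bias": (1.10, 0.90)}.items():
    h, s, v = colorsys.rgb_to_hsv(*apply_wb(skin, *gains))
    print(f"{label:9s}  hue={h * 360:5.1f} deg  saturation={s:.2f}")
```

With these toy numbers the cool bias roughly halves the measured saturation of the skin tone while the warm bias increases it, which is the effect Brownlee describes.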

So while the iPhone 12 Pro Max photo has a lot of detail and sharpness (the iPhone was phone “M”), it was the coolest color balance of the group and the boosted exposure blew out the sky in the background. This combination led it to once again lose to other smartphones that handled color and dynamic range better.

So why do some phones consistently have such a cool tint to their white balance? Brownlee admits his guess is as good as anyone’s, but the answer he comes up with may date back to the origins of photography and how even film was designed.

“On photos of people with fairer skin tones, which is most people, it doesn’t affect the skin tone look quite as much,” Brownlee speculates. “You can get away with it. And also, blue skies will look more blue than they would if you were biasing warm.”

Even more interesting (or frustrating, depending on how you look at it), Brownlee discovered that it’s not just what your smartphone captures that matters, but also how the service hosting that photo processes it. In the final showdown, the image of the two pumpkins looked different on Twitter than on Instagram, leading to a pretty notable disparity in the voting between the two platforms.

Watching Brownlee’s full conclusion and discussion of this year’s results is definitely worth the time. While crowning a single smartphone a winner is interesting, even more so is the philosophic discussion at the end. How the general public sees images and what elements of a photo they tend to value is helpful information for any photographer.

For more from Marques Brownlee, make sure to subscribe to his YouTube channel.

Raw Isn’t Magic. With the right tools Log does it too.

Raw can be a brilliant tool, and I use it a lot; high-quality raw is my preferred way of shooting. But it isn’t magic, it’s just a different type of recording codec.
 
All too often – and I’m as guilty as anyone – people talk about raw as “raw sensor data,” a term that implies raw really is something very different from a normal recording. In reality it’s not that different. When shooting raw, the video frames from the sensor are simply recorded before they are converted to a colour image. A raw frame is still a picture; it’s just a bitmap made up of brightness values, with each pixel represented by a single code value, rather than a colour image where each location is represented by three values, one each for Red, Green and Blue, or for Luma, Cb and Cr.
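As a rough illustration of that difference, here is a tiny Python sketch. It is not any camera’s real pipeline; it just assumes a typical RGGB Bayer layout and shows how four single-value photosites become twelve colour values once the frame is “developed” into RGB.

```python
# A toy illustration, not a real camera pipeline: one code value per
# photosite (assuming an RGGB Bayer layout) becomes three values per
# pixel once the frame is "developed" into colour.
import numpy as np

raw = np.array([[200,  90],    # R  G   <- single brightness value per photosite
                [ 80,  40]])   # G  B

# The crudest possible demosaic: average the samples of each colour over
# the 2x2 block and assign that RGB triplet to every output pixel.
r = raw[0, 0]
g = (raw[0, 1] + raw[1, 0]) / 2
b = raw[1, 1]
rgb = np.tile([r, g, b], (2, 2, 1))

print(raw.size, "raw samples ->", rgb.size, "colour values")  # 4 -> 12
```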

As that raw frame is still nothing more than a normal bitmap, all the camera’s settings such as white balance, ISO and so on are in fact baked in. Each pixel has only one value, and that value was determined by the way the camera was set up. Nothing you do in post production can change what was actually recorded.

Modern cameras when shooting log or raw also record metadata that describes how the camera was set when the image was captured. 

WB, ISO and the rest are baked into the recorded raw file. I know lots of people will be disappointed to hear this, or will simply refuse to believe it, but that’s the truth about a raw bitmap image: there is a single code value for each pixel, and that value is determined by the camera settings.

This can be adjusted later in post production, but the adjustment range is not unlimited, and it is not the same as making the adjustment in the camera. There can also be consequences for image quality if you make large adjustments.

Log can also be adjusted extensively in post. For decades, features shot on film were scanned to 10 bit Cineon log (the curve S-Log3 is based on), and 10 bit log was used for post production until 12 bit and then 16 bit linear intermediates such as OpenEXR came along. That alone should tell you that log can be graded very well and very extensively.
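For readers who want to see what a log curve actually is, here is a deliberately generic Python sketch. It is not the exact Cineon or S-Log3 maths, just a curve with a broadly similar shape, and it shows that encoding linear light to a 10 bit log code and decoding it back is a simple, reversible mapping, which is why log grades so well when the software knows which curve was used.

```python
# A deliberately generic log curve (illustrative only, not the exact
# Cineon or S-Log3 formula): scene-linear values are squeezed onto a
# 10-bit code scale, and because the curve is invertible the grade can
# map back to linear light at any time (up to quantisation).
import math

def encode(x, mid_grey=0.18, codes_per_stop=85, mid_code=445):
    """Map linear light to a 10-bit code, ~85 code values per stop."""
    return round(mid_code + codes_per_stop * math.log2(x / mid_grey))

def decode(code, mid_grey=0.18, codes_per_stop=85, mid_code=445):
    """Exact inverse of encode (ignoring rounding)."""
    return mid_grey * 2 ** ((code - mid_code) / codes_per_stop)

for stops in (-4, 0, +4):   # deep shadow, mid grey, bright highlight
    lin = 0.18 * 2 ** stops
    cv = encode(lin)
    print(f"{stops:+d} stops: linear {lin:.4f} -> code {cv:4d} -> back {decode(cv):.4f}")
```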

But then many people will tell you that you can’t grade log as well as raw. Often they will point to stills photography as an example, where there is a huge difference between what you can do with a raw file and a normal image. But remember that this compares a highly compressed 8 bit JPEG with an often uncompressed 12 or 14 bit raw file. It’s not a fair comparison; of course you would expect the 14 bit file to be better.

The other argument often given is that it’s very hard to change the white balance of log in post, that it doesn’t look right or it falls apart. Often these issues have nothing to do with the log recording and everything to do with the tools being used.

When you work with raw in your editing or grading software you will almost always be using a dedicated raw tool or raw plugin designed for the flavour of raw you are using. So it shouldn’t come as a surprise that to get the best from log you should be using dedicated log tools. In the example below you can see how Sony’s Catalyst Browse can correctly change the white balance and exposure of S-Log material with simple sliders.
 
On the left is the original S-Log3 clip with the wrong white balance (3200K) and on the right is the corrected image. The only corrections made are via the Temperature slider and exposure slider.
 
Applying the normal linear or power law (709 is power law) corrections found in most edit software to log won’t have the desired effect, and basic edit software rarely has proper log controls. You need to use a proper grading package like Resolve and its log controls. Better still, use some form of colour-managed workflow like ACES, where your specific type of log is converted on the fly to a special digital intermediate. There is no transcoding; you just tell ACES what the footage was shot on and the magic happens under the hood.
 
The same S-Log3 clip as in the above example, this time in DaVinci Resolve using ACES. The only corrections being made are via the Temp slider for the white balance change and the Log-Offset wheel which in ACES provides a precise exposure adjustment.
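It’s worth showing, in a small hedged sketch, why an offset applied in log space behaves like an exposure change. On a pure log2 curve the maths works out exactly; real curves such as ACEScct add a linear toe near black, so treat this as the underlying idea rather than the precise ACES implementation.

```python
# A minimal sketch of why a "Log-Offset" style control acts like exposure:
# on a pure log2 curve, adding a constant in log space is exactly a
# multiplication (a gain) back in linear light. Real curves such as
# ACEScct add a linear toe, so this only holds for the log region.
import math

def to_log(x, m=0.1, c=0.5):     # toy log curve: y = m*log2(x) + c
    return m * math.log2(x) + c

def to_lin(y, m=0.1, c=0.5):     # its exact inverse
    return 2 ** ((y - c) / m)

x = 0.18                          # mid grey, scene linear
offset = 0.1                      # +0.1 in log space = +1 stop here (0.1 / m)
print(round(to_lin(to_log(x) + offset), 4))   # 0.36, exactly one stop brighter than 0.18
```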
 
When people say you can’t push log, more often than not it isn’t a matter of can’t, it’s a case of can, but you need to use the right tools.
 
This is what log shot with completely the wrong white balance and slightly overexposed looks like after using nothing but the WB and ISO sliders in Catalyst Browse. I don’t believe raw would have looked any different.
 
Less compression and greater bit depth are where the biggest differences between a log and a raw recording come from, not so much whether the data is log or raw. Don’t forget that raw is often recorded using a log curve, which kind of makes the “you can’t grade log” argument a bit daft.
 
Camera manufacturers and raw recorder manufacturers are perfectly happy to let everyone believe raw is magic and, worse still, to let people believe that ANY type of raw must be better than all other types of recordings. Read through any camera forum and you will see plenty of examples of “it’s raw so it must be better” without any comprehension of what raw is, or of the fact that in reality it’s the way the raw is compressed and its bit depth that really matter.

If we take ProRes Raw as an example: For a 4K 24/25fps file the bit rate is around 900Mb/s. For a ProRes HQ file the bit rate is around 800Mb/s. So the file size difference between the two is not at all big.
 
But the ProRes Raw file only has to store around a third as many data points as the component ProRes file. As a result, even though the ProRes Raw file often has a higher bit depth, which in itself usually means a better quality recording, it is also much, much less compressed and will therefore have fewer artefacts.
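Some rough arithmetic makes the point. Taking the bit rates above and the rough “one third as many data points” figure at face value for a UHD 25p stream, this little Python sketch estimates how many compressed bits each recorded sample gets:

```python
# Back-of-the-envelope arithmetic, taking the article's figures at face
# value: roughly how many compressed bits each recorded sample receives.
width, height, fps = 3840, 2160, 25

raw_bitrate    = 900e6   # ~900 Mb/s ProRes RAW (figure from the article)
prores_bitrate = 800e6   # ~800 Mb/s ProRes 422 HQ (figure from the article)

raw_samples    = width * height       # one code value per photosite
prores_samples = width * height * 3   # roughly 3x the data points per frame

print(f"raw:    {raw_bitrate    / (fps * raw_samples):.1f} bits per sample")
print(f"prores: {prores_bitrate / (fps * prores_samples):.1f} bits per sample")
# The raw file spends roughly three times as many bits on each sample,
# i.e. it is far less heavily compressed for a similar overall file size.
```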

It’s the reduced compression and deeper bit depth possible with raw that can lead to higher quality recordings, and that may bring some grading advantages compared to a normal ProRes file. The best bit is that there is no significant file size penalty. So, given that you won’t need more storage, which should you use: the higher bit depth, less compressed file, or the more compressed one? But not all raw files are the same. Some cameras use highly compressed 10 bit raw, which frankly won’t be any better than most other 10 bit recordings.

If you could record uncompressed 12 bit RGB or component log from these cameras, that would likely be as good as any raw recording. But the files would be huge. It’s not that raw is magic; it’s just that raw is generally much less compressed and may, depending on the camera, have a greater bit depth.


Manufacturer Says ‘Time of Flight’ Sensors to Become Standard in Smartphones

German semiconductor manufacturer Infineon Technologies believes that Time of Flight, or ToF, sensors will become a standard in high-end smartphones. The company says the technology will overcome low-light focusing issues that currently plague the devices.

In an interview with SemiconductorForYou, the Senior Vice President of Sensors at Infineon, Philipp von Schierstaedt, says that the company sees the development of 3D mapping via ToF image sensors as the soon-to-be standard in high-end smartphone cameras.

Time-of-flight (ToF) technology projects a single modulated, infrared light source onto an object, person, or scene of interest. The reflected light is captured by the ToF imager, which measures amplitude and phase difference per pixel. The result is a highly-reliable image of the distance plus a gray-scale picture of the entire scene.
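The distance measurement itself follows a simple relation for continuous-wave ToF. This is the general textbook formula, not anything Infineon-specific: distance is proportional to the measured phase shift, and the modulation frequency sets the unambiguous range.

```python
# The general continuous-wave ToF relation (textbook formula, nothing
# Infineon-specific): the phase shift of the reflected modulated light
# encodes distance; the modulation frequency sets the unambiguous range.
import math

C = 299_792_458  # speed of light, m/s

def tof_distance(phase_rad, f_mod_hz):
    """Distance implied by a measured phase shift at a given modulation frequency."""
    return C * phase_rad / (4 * math.pi * f_mod_hz)

f_mod = 80e6                                   # 80 MHz modulation, an illustrative value
print(round(tof_distance(math.pi, f_mod), 3))  # ~0.937 m for a half-cycle phase shift
print(round(C / (2 * f_mod), 3))               # ~1.874 m unambiguous range
```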

“The deployment of 3D sensing technologies… solve(s) the challenges that new applications set to the rear camera and that traditional 2D technologies fail to accomplish: a real 3D depth map,” von Schierstaedt says.

The company believes that certain challenges that have kept ToF sensors from wider use are about to be resolved by its latest technology. Many ToF sensors simply lack sufficient resolution, and raising it has traditionally meant tradeoffs such as shorter working distances or dramatically greater power consumption.

“Such shortcomings are now being addressed by the latest 3D ToF imager generations in the REAL3 portfolio of Infineon,” von Schierstaedt says. “Therefore we’re convinced that we will see our new 3D sensor either enabling new classes of applications or taking existing applications to a new level of quality and user experience.”

Sony’s latest Time of Flight sensor

Infineon will be up against Sony when it comes to ToF development. Recently, Sony debuted a brand new ToF sensor designed for industrial use, but the company clearly recognizes the benefits of ToF in consumer devices, since the Sony Xperia 1 Mark II already has a ToF sensor. What’s more, Infineon’s stance on the future of ToF sensors does lend credence to the possibility of widespread deployment.

It remains to be seen whether the benefits of ToF are necessary on larger sensors with inherently better low-light capture, but the increased research and development around the technology can only be a good thing for the market. For now, the best application of this 40-year-old autofocusing technology is its continued use in smartphones, which benefit considerably more from it due to the far smaller pixels of their sensors and their weaker low-light performance compared to large-sensor cameras.

(via Image Sensors World)

Forensic Experts Used Photos and Videos From Social Media to Reconstruct Beirut Explosion

On August 4, an explosion rocked the city of Beirut, killing over 200 people and injuring more than 6,500. In order to reconstruct exactly what happened, forensics researchers were able to piece together the event using photos and videos uploaded to social media.

Forensic Architecture was invited by the Egyptian online journal Mada Masr to review the available open-source information, including photos, videos, and documents, to help provide an accurate 3D timeline and model of the event. That model, which includes the warehouse, the clouds of smoke, the initial blast sphere, and the parts of the city where the reference images and videos were captured, has been made available on GitHub.

Starting with the very first image uploaded to social media on August 4, 2020 at 5:54 PM, the researchers began their timeline.

They used visual markers to determine the location of the photo and calculated the camera’s cone of vision. They did this to determine the earliest sign of a smoke plume.

“Smoke plumes are continuously transforming and have a unique shape at every moment,” the forensic researchers said. “We modeled the plume at this crucial stage to help synchronize other videos without a time stamp.”

Once the forensics team had this information, they were able to search for videos uploaded at similar times and analyze how the smoke plume evolved. They analyzed the color of the smoke as it darkened over a period of 10 minutes, which indicated that the materials that were burning had changed.

By 6:07 PM, the researchers determined that a second fire had started, creating an additional plume. Seconds later, the explosion obscured the entire area of the warehouses. However, because of the clean, 180-degree shape of the blast, they concluded that this explosion had a single detonation point, which they were able to map by finding the center of the blast sphere.

They then used the two plumes, the original smoke plume and the explosion plume, to synchronize the remaining footage.

Images and video taken from a variety of locations and angles allowed the researchers to accurately reconstruct the event once they were properly placed in the timeline. It’s a wealth of information that, before the proliferation of social media, simply would not have been available; experts would likely have spent far longer trying to piece together the incident with far less information, and likely ended up with a much less accurate picture of the event.

The researchers also were able to pull together images from a combination of sources along with earlier reports of poorly stored, flammable ammonium nitrate and other explosive materials such as tires and fireworks that were housed in the warehouse. Using these images, the team was able to accurately map the interior of the warehouse and isolate why they believe the explosion happened.

While it’s easy to blame social media for much of society’s woes, this research goes to show that crowd-sourced public information can be extremely valuable. Just one photo uploaded to Twitter provided a monumentally helpful piece of information that allowed forensics experts to build a timeline and model of the event. This information was critical to understanding why the explosion happened and will hopefully be used to prevent such a tragedy from occurring again.

(via Gizmodo)

Xiaomi Promises Better Quality Photos Via Its Retractable Smartphone Lens

As part of a larger announcement at its Mi Developer Conference in Beijing, Xiaomi announced a retractable wide-aperture lens technology that it says will improve its cameras’ light-gathering ability by 300%.

Xiaomi says that it drew inspiration for its self-developed Retractable Wide-Aperture Lens Technology from traditional camera designs. The structure is designed to sit entirely within the smartphone body and extend out as needed. The retractable, telescoping design makes room for a larger aperture, which Xiaomi says increases light-gathering ability by the aforementioned 300%. The company says the technology will also deliver better portrait and night photography than currently available smartphone camera systems.
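Xiaomi has not published f-numbers, so the following is only the general relationship between f-number and light gathering, with hypothetical values: light gathered scales with aperture area, so a 300% improvement (four times the light) corresponds to two full stops.

```python
# General relation only, with hypothetical numbers (Xiaomi has not
# published aperture values): light gathered scales with 1/N^2 for
# f-number N, so 4x the light (a "300% improvement") is two full stops.
def light_ratio(n_old, n_new):
    """How many times more light the new f-number gathers than the old one."""
    return (n_old / n_new) ** 2

print(light_ratio(2.8, 1.4))  # 4.0, e.g. moving from f/2.8 to f/1.4
```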

In addition to physically bringing in more light, the retractable lens also integrates a new Xiaomi image stabilization technology that the company says offers a larger anti-shake angle, making images more stable and increasing sharpness by up to 20%.

The company stated that it plans to implement the new technology into its forthcoming smartphone designs with the goal of bringing “professional photography to hundreds of millions of smartphone users around the world.”

Telescoping lens designs on fixed-lens cameras are probably best known from point-and-shoots, a segment of the camera industry that has seen a huge decline in relevance since the rise of smartphone photography. Unlike Xiaomi’s design, though, those cameras typically used the telescoping mechanism to increase zoom, not to improve light gathering. Xiaomi is borrowing the telescoping idea from point-and-shoots to tackle the problem of increasingly bulky camera modules, reducing that bulk when the camera isn’t in use while still increasing its image-capturing capability.

Since most consumers would likely prefer their smartphones to remain compact and pocketable, different design ideas that attempt to tackle this problem have risen over the years. In addition to Xiaomi’s design presented here, Vivo recently showed a prototype design that would allow the entire camera to be removed and replaced.

How well Xiaomi’s retractable lens design works remains to be seen, but the promised increases in light gathering ability should allow cameras to make better use of the small sensors most are currently equipped with.

(via Xiaomi via Slashgear)


Image credits: Featured image via Xiaomi, point-and-shoot photo by Eric Muhr on Unsplash.

Kodak Professional Select Uses AI to Auto-Cull Your Images

Kodak has launched a new application – powered by artificial intelligence – that promises to quickly cull your images for you based on a set of rules. Called Kodak Professional Select, the application promises fast, easy, and accurate results.

Kodak says that the service accepts hundreds or even thousands of images at a time and applies its “proprietary AI” to evaluate those images against a set of criteria. The algorithm looks at technical attributes like color, focus, brightness, exposure, contrast, and sharpness while also considering aesthetic qualities like whether eyes are open or closed, whether a subject is smiling, and whether faces are centered in the frame.

The process for using the application is simple. After installing the software, upload images from an event that you want culled. After dropping them into the app, it transmits “appropriately sized images” to the cloud to be processed. The system then analyzes each image and automatically ranks, organizes, and selects what it believes to be the best from the entire event. You can then review its results, adjusting the score criteria, adding or removing your own selections, and separating or combining duplicates, among other culling options. Finally, add keywords, assign star ratings, and adjust orientation before exporting your selections for editing.
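Kodak has not published how Professional Select actually scores images, but the general idea behind automated culling can be sketched in a few lines of Python. Everything below, from the attribute names to the weights, is hypothetical and only illustrates ranking images by a combined technical and aesthetic score.

```python
# Purely hypothetical sketch of the idea behind automated culling; Kodak
# has not published how Professional Select scores images. It combines a
# technical score and an aesthetic score per image and keeps the top picks.
from typing import NamedTuple

class Shot(NamedTuple):
    name: str
    technical: float   # e.g. focus/exposure/sharpness, 0..1 (made-up scale)
    aesthetic: float   # e.g. eyes open, smiling, framing, 0..1 (made-up scale)

def cull(shots, keep=2, w_tech=0.6, w_aes=0.4):
    """Rank by a weighted score and keep the best few (weights are invented)."""
    score = lambda s: w_tech * s.technical + w_aes * s.aesthetic
    return sorted(shots, key=score, reverse=True)[:keep]

shots = [Shot("IMG_001", 0.9, 0.4), Shot("IMG_002", 0.7, 0.9), Shot("IMG_003", 0.3, 0.8)]
print([s.name for s in cull(shots)])   # ['IMG_002', 'IMG_001']
```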

The platform is designed to only provide organizational help and is not an editing tool.

The developers of the software said that they wanted to build a technology that would help the modern photographer, and to do so they asked professionals what they needed help with most. They found that, overwhelmingly, the number one pain point was image culling, with photographers spending hours looking at images one by one to separate the good from the bad.

Seeing that as a poor use of a photographer’s time, the developers created Kodak Professional Select to offer a better way. The company says it combined proprietary imaging science algorithms with photographer feedback to create an artificial intelligence designed to act as a “virtual assistant.”

This particular branch of the Kodak name appears to be unrelated to the larger Eastman Kodak Company that still produces film; according to the Professional Select website, the business was spun off from the main company in 2013.

You can try Kodak Professional Select for free for 30 days, after which the service is billed at $29.95 per month or $299.95 per year. To learn more, visit the company’s website here.

This is How a Match-Needle Exposure Meter From a 1971 Canon Works

Technology Connections, a YouTube channel that covers a wide array of interesting technology stories, has shared this 28-minute video that explores how the Canon F-1 from 1971 works, with particular focus on the camera’s light meter.

In addition to specific details about the Canon F-1, the host goes into a lot of detail on the history of terms like ISO, how f-stops are calculated, and how shutter speed and aperture work together to create an exposure. If you’re new to photography and want a fairly fast yet thorough explanation of how all the settings on modern cameras work, this video is a surprisingly good place to start.
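Since the video spends time on how aperture and shutter speed trade off against each other, here is a small worked example using the standard exposure value formula, EV = log2(N²/t) at ISO 100. The specific settings are just illustrative.

```python
# A worked version of the trade-off the video explains: exposure value
# (at ISO 100) combines aperture and shutter speed, and swapping one stop
# of aperture for one stop of shutter speed leaves the exposure unchanged.
import math

def ev(f_number, shutter_s):
    """EV100 = log2(N^2 / t)."""
    return math.log2(f_number ** 2 / shutter_s)

print(round(ev(8.0, 1 / 125), 2))  # ~12.97
print(round(ev(5.6, 1 / 250), 2))  # ~12.94, effectively the same exposure
```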

The main topic of the video, though, is the camera’s match-needle exposure meter, so called because the photographer adjusts aperture and shutter speed until a needle in the viewfinder lines up with an index. The meter is the only part of the Canon F-1 that requires a battery to operate, and it works differently than modern exposure meters. Many earlier match-needle meters needed no battery at all because they used selenium cells, which rely on the photoelectric properties of the element selenium. According to a detailed breakdown of the technology here, a selenium meter is an instrument “which is connected to the anode and cathode of a selenium photocell that produces more or less electric power when exposed to more or less light.”

It’s a fascinating old camera technology that isn’t used much today. Selenium cells did not age well: they tended to generate less current as they were used over the years and were exposed to light, heat, and moisture. As a result, many old selenium meters are inaccurate today or completely dead. However, a selenium meter that was rarely used may still function perfectly well despite its age.

For more information on match-needle exposure meters, you should read this detailed article here, and for more deep-dives into technology, you can subscribe to Technology Connections on YouTube.

Scientists Photographed Our ‘Galactic Bulge’ Using a Dark Energy Camera

In an effort to research how the center of the Milky Way Galaxy formed what is known as a “galactic bulge,” scientists used the Dark Energy Camera to survey a portion of the sky and capture a photo containing billions of stars.

NASA’s Hubblesite describes our galaxy as “shaped like two fried eggs glued back-to-back.” This depiction makes clear the central bulge of stars that sits in the middle of a sprawling disk of stars that we usually see in two-dimensional drawings. You can get a better idea of how that looks thanks to a rendering from the ESA below:

This makeup is thought to be a common feature among myriad spiral galaxies like the Milky Way, and scientists desired to study how the bulge was formed. Were the stars within the bulge born early in our galaxy’s history, 10 to 12 billion years ago, or did the bulge build up over time through multiple episodes of star formation?

“Many other spiral galaxies look like the Milky Way and have similar bulges, so if we can understand how the Milky Way formed its bulge then we’ll have a good idea for how the other galaxies did too,” said co-principal investigator Christian Johnson of the Space Telescope Science Institute in Baltimore, Maryland.

The team surveyed a portion of our sky covering more than 200 square degrees – an area approximately equivalent to 1,000 full Moons – using the Dark Energy Camera (DECam) on the Victor M. Blanco 4-meter Telescope at the Cerro Tololo Inter-American Observatory in Chile, a Program of NSF’s NOIRLab.

This image shows a wide-field view of the center of the Milky Way with a pull-out image taken by the DECam.

The scientific sensor array on the DECam is made up of 62 separate 2048×4096 pixel backside-illuminated CCD sensors, totaling 520 megapixels. An additional 12 2048×2048 pixel CCD sensors (50 megapixels) are used to guide the telescope, monitor focus, and help with alignment.
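A quick back-of-the-envelope check of those figures, using nothing but the numbers quoted above:

```python
# Quick arithmetic check of the sensor-array figures quoted above.
science = 62 * 2048 * 4096   # 62 science CCDs
guide   = 12 * 2048 * 2048   # 12 guide/focus/alignment CCDs

print(science / 1e6)  # 520.093696 -> ~520 megapixels
print(guide / 1e6)    # 50.331648  -> ~50 megapixels
```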

This wide-field camera is capable of capturing 3 square degrees of sky in a single exposure and allowed the team to collect more than 450,000 individual photographs. From that data the team was able to determine the chemical compositions for millions of stars. The image below contains billions of stars:

You can view a pannable and zoomable version of this image here. It uses the same interface as the giant 2.5 gigapixel image of the Orion Constellation taken by Matt Harbison.

For this particular study, scientists looked at a subsample of 70,000 stars from the above image. It had been previously believed that the stars in the bulge were born in two separate “waves” early in the history of the galaxy, but thanks to data gleaned from the study, now scientists think that a vast majority were formed at about the same time nearly 10 billion years ago.

According to NASA, the researchers are looking into the possibility of measuring stellar distances to make a more accurate 3D map of the bulge. They also plan to look for correlations between their metallicity measurements and stellar orbits. That investigation could locate “flocks” of stars with similar orbits, which could be the remains of disrupted dwarf galaxies, or identify signs of accretion, such as stars orbiting opposite the galaxy’s rotation.

(Via Hubblesite and SyFy)

Tech Startup Boom Raises $7M, Wants To Be Amazon for Photography

Boom, a Milan-headquartered tech startup, has raised $7 million in Series A funding based on its proprietary technology that is said to provide a way for companies to purchase “high-quality” images affordably, on a global scale.

Boom states that its goal isn’t to change photography (or filmmaking, for that matter, as the system also supports booking drone pilots, videographers, designers, and other creatives), but to change the way visual content is created using “intelligent technology.” The company says it has built its own proprietary artificial intelligence and machine learning technology that supposedly trims a photographer’s work down to the “bare essentials” while handling everything else, from logistics to post-production.

The promise to its corporate clients is a platform that matches photoshoot requests with the best photographers in a given area, combined with an automatic photo-editing system that gives clients faster access to finished images.

As far as how this benefits photographers, Boom hopes that the promise of more work opportunities with less stress is enough to entice high-quality talent.

Image via Boom.co

“We could see that countless internet giants were changing the way people shopped online, uploading billions of pictures on their websites and platforms every day, but these same brands had no access to a content provider that could keep up with their scaled-up, global, fast-paced environment. The whole system was expensive and obsolete,” Founder and CEO Federico Mattia Dolci said in a report on TechCrunch. “Our customers can place an order and expect a delivery 24h later, whether the photoshoots take place in Milan, New York, or Sydney, and whether the order calls for one photoshoot or a thousand! We guarantee speed, efficiency, and quality consistency every single time”.

Boom claims in excess of 250 major corporate clients including the likes of Deliveroo, Vacasa, Uber Eats, OYO, Lavanda, Casavo, Westwing, GetYourGuide, and more.

The $7 million in funding comes after a successful first-round seed of $600,000 the company obtained in June of 2018 and a second funding round in July of 2019 that totaled $3.4 million. In January of 2020, Boom consisted of 60 staff members that the company wished to expand to 120 by the end of the year. The company claims to represent over 35,000 photographers, operates in more than 80 countries, and has processed more than 3 million images to date.

Boom says that it will invest the latest round of funding into its “proprietary plug and play technology for managing the commercial photography production pipeline,” and will increase its presence to 180 countries including adding offices and studios in London and New York. The company is pitching itself as wanting to become “the Amazon for commercial photography.”

(Via TechCrunch)