
This Device Can Charge Four Different Camera Batteries at the Same Time

Very rarely does a photographer carry only one type of battery these days. From camera batteries to rechargeable AAs to drone batteries, it can be cumbersome to carry a separate charger for each when traveling. Bronine wants to change that with the Volkit.

The Volkit is a free-voltage charging device with modular charging bricks that work with an assortment of brands and batteries. Bronine calls the Volkit an “AI charger” because it analyzes the battery attached to it and automatically adjusts its output to the correct voltage, anywhere between 1 and 20 volts. Depending on the model, the Volkit can charge four of any combination of supported batteries at the same time while managing the voltage output to each individually. Bronine says it has been working since 2017 on the technology that allows the device to accurately determine the correct voltage automatically, and that it is only now ready for consumers.

Calling it AI might be a bit of a stretch, but the technology does sound complicated.

The company supports a long list of batteries: camera batteries from Canon, Nikon, Sony, Panasonic, and Fujifilm; DJI drone batteries; GoPro batteries as far back as the Hero 5; and cylindrical lithium-ion cells in various sizes (like rechargeable AAs). You can see the full list of supported camera batteries here.

The Volkit does require a power source, but Bronine says it can run from a power bank, a wall outlet, or a vehicle’s cigarette lighter socket.

Bronine recognized that not all charging kits will fit neatly side by side against the central brick, so it created extension cables that let you connect different battery chargers without running into physical space issues.

The Volkit connects to what Bronine calls “Camera Kits” via a four-pin connection, and each kit is held in place by the battery itself. Once a battery is connected, the Volkit takes a few seconds to determine the correct voltage and begin charging. The top of the main unit has a screen that displays the current voltage output, the capacity (in mAh) of the connected battery, and the level of charge.

Bronine is offering several configurations, ranging from a two-port Volkit to a four-port model, with prices starting at $70.

The Bronine Volkit is currently slated to begin shipping from South Korea by March 2021 and is fully backed on Kickstarter. Bear in mind that this is Bronine’s first project and, as with any Kickstarter campaign, crowdfunding is not pre-ordering. Do your research and back accordingly.

New AI Software Automates ‘Ghost Mannequin’ Images, Promises to Streamline Clothing Photography

Software company autoRetouch has debuted a new software feature that promises to reduce hours of post-production time down to seconds. Automating what is called a “ghost mannequin” photo, the software quickly creates composite images for use in e-commerce.

A ghost mannequin image is a photo with a sort of “hollow man” look to it: it displays the form of clothing on a body without the body actually being shown wearing it. These images are usually created by taking a photo of the clothes being worn from the front, then compositing that image with a separate photo of the clothing turned inside-out with the label facing the camera. The result is a floating garment; according to autoRetouch, such images make up about 25% of the fashion retail image market.
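autoRetouch hasn’t published how its compositing works, but the final merge step described above can be illustrated with a toy sketch. All names here are hypothetical: given a front shot with the body already removed, an inlay shot, and a mask marking where the inside-back of the garment should show through (the “hollow” neck area), the merge is essentially a masked copy:

```python
import numpy as np

def ghost_composite(front, inlay, neck_mask):
    """Merge a front shot with an inlay shot (illustrative only).

    front, inlay: H x W x 3 float arrays (body already removed from `front`)
    neck_mask:    H x W bool array, True where the inside-back of the
                  garment (visible only in the inlay shot) should appear
    """
    out = front.copy()
    out[neck_mask] = inlay[neck_mask]  # fill the hollow region from the inlay
    return out
```

The hard part, of course, is what this sketch takes as given: segmenting the body out and aligning the two shots, which is where autoRetouch’s AI claims to earn its keep.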

autoRetouch bills its patent-pending Ghost Mannequin AI component as an industry first: you upload two images of the same product (a product shot and an inlay shot), the software removes the physical body while retaining the garment’s shape for a completely transparent figure, and the two shots are merged seamlessly.

“The ghost mannequin feature offers automation of complex editing that was never before possible, across any platform, without any significant amount of manual effort,” the company says.

The entire process is browser-based and can be tested on the company’s website right now. If you are the kind of photographer who is responsible for the high-volume, burn-and-turn production that many clothing brands currently demand, this software stands to save you a great deal of time and money, provided it works as autoRetouch promises.

The ghost mannequin auto edit option is one of a few artificial intelligence-powered features autoRetouch offers. The company also advertises fast background removal and basic automatic skin retouching specifically aimed at the fashion e-commerce market.

The Ghost Mannequin feature is currently available as a demo on autoRetouch.com and will become generally available in early 2021. The company charges an affordable $0.10 per image and doesn’t require a subscription.

DIY Camera Uses Machine Learning to Audibly Tell You What it Sees

Adafruit Industries has created a machine learning camera built with the Raspberry Pi that can identify objects extremely quickly and audibly tell you what it sees. The group has listed all the necessary parts you need to build the device at home.

The camera is based on Adafruit’s BrainCraft HAT add-on for the Raspberry Pi 4 and uses TensorFlow Lite object recognition software to identify what it is seeing. According to Adafruit’s website, it’s compatible with both the 8-megapixel Pi camera and the 12.3-megapixel interchangeable-lens version of the module.

While the device is interesting on its own, DIY Photography makes a solid point by explaining a more practical use case for photographers:

You could connect a DSLR or mirrorless camera from its trigger port into the Pi’s GPIO pins, or even use a USB connection with something like gPhoto, to have it shoot a photo or start recording video when it detects a specific thing enter the frame.

A camera that can recognize what it is looking at could be set to take a photo only when a specific object, animal, or person enters the frame, which opens up security-system and wildlife-monitoring applications. Whenever you might wish your camera knew what it was looking at, this kind of technology could make that a reality.
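Adafruit’s guides cover the full TensorFlow Lite setup; as a rough sketch of the trigger idea only (the labels, threshold, and function names here are made up, not taken from Adafruit’s code), the decision logic boils down to checking the detector’s output for a target label before firing the shutter:

```python
# Hypothetical sketch: fire the shutter only when the object detector
# reports a label we care about with enough confidence. On a real
# Raspberry Pi, the trigger action would toggle a GPIO pin wired to the
# camera's remote port, or fire the camera over USB via gPhoto.
TARGET_LABELS = {"person", "bird", "cat"}

def should_trigger(detections, min_score=0.6):
    """detections: list of (label, confidence) pairs from an object detector."""
    return any(label in TARGET_LABELS and score >= min_score
               for label, score in detections)
```

In a real build, this function would be called on every frame of detector output, with the GPIO or gPhoto call gated behind it.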

You can find all the parts you will need to build your own version of this device on Adafruit’s website here. Adafruit has also published an easy machine learning guide for the Raspberry Pi as well as a guide on running TensorFlow Lite.

(via DPReview and DIY Photography)

Kodak Professional Select Uses AI to Auto-Cull Your Images

Kodak has launched a new application, powered by artificial intelligence, that promises to quickly cull your images for you based on a set of rules. Called Kodak Professional Select, it is pitched as fast, easy, and accurate.

Kodak says the service accepts hundreds or even thousands of images at a time and applies its “proprietary AI” to evaluate them against a set of criteria. The algorithm looks at technical attributes like color, focus, brightness, exposure, contrast, and sharpness, while also considering aesthetic qualities like whether eyes are open or closed, whether a subject is smiling, and whether faces are centered in the frame.

The process is simple. After installing the software, upload images from an event that you want culled. Once they are dropped into the app, it transmits “appropriately sized images” to the cloud for processing. The system then analyzes each image and automatically ranks, organizes, and selects what it believes to be the best of the entire event. You can then review the results: adjust the score criteria, add or remove your own selections, and separate or combine duplicates, among other culling options. Finally, add keywords, assign star ratings, and adjust orientation before exporting your selections for editing.
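Kodak doesn’t disclose how its scoring actually works, but two of the technical attributes it lists, sharpness and brightness, have well-known stand-ins in image processing. This sketch is purely illustrative (it assumes grayscale images as NumPy arrays and has no connection to Kodak’s implementation):

```python
import numpy as np

def sharpness_score(gray):
    """Variance of a discrete Laplacian: a common focus/sharpness metric.

    gray: H x W float array. Blurry images have little high-frequency
    detail, so the Laplacian response (and its variance) is low.
    """
    lap = (-4 * gray[1:-1, 1:-1]
           + gray[:-2, 1:-1] + gray[2:, 1:-1]
           + gray[1:-1, :-2] + gray[1:-1, 2:])
    return float(lap.var())

def brightness_score(gray):
    """Mean pixel value, as a crude exposure check."""
    return float(gray.mean())
```

A culling tool would combine many such metrics, plus face and eye detection for the aesthetic criteria, into a single per-image rank.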

The platform is designed to only provide organizational help and is not an editing tool.

The developers said they wanted to build technology that would help the modern photographer, so they asked professionals what they needed help with most. Overwhelmingly, the number one pain point was image culling: photographers were spending hours looking at images one by one to separate the good from the bad.

Seeing that as a poor use of a photographer’s time, the developers created Kodak Professional Select to offer a better way. The company says it combined proprietary imaging-science algorithms with photographer feedback to create an artificial intelligence designed to be a “virtual assistant.”

The branch of Kodak behind this software appears unrelated to the larger Eastman Kodak Company that still produces film; according to the Professional Select website, the business was spun off from the main company in 2013.

You can try Kodak Professional Select for free for 30 days, after which the service is billed at $29.95 per month or $299.95 per year. To learn more, visit the company’s website here.

How to Get Professional Results with Photoshop’s AI Sky Replacement Tool

One of the major updates to the latest version of Photoshop is the addition of Sky Replacement: a tool that has the potential to save you a ton of time when editing your landscape images. But as Aaron Nace explains in this video, this AI-powered tool requires a bit of thought if you want to get professional results.

AI-powered photo editing tools are always sold as “one click” or “a few clicks” solutions that can transform a photo with next to no input from you. But even with the most advanced machine learning available, no automated tool can generate foolproof results without a little thought from the creator on the other end of the mouse.

Photoshop’s new Sky Replacement tool is a great example of this principle in action, as PHLEARN‘s Aaron Nace explains in the video above.

In the course of testing out AI Sky Replacement and showing you how it actually works inside Photoshop, Nace takes plenty of time to explain how to analyze the lighting in your original photo and create the most realistic composite possible. In his example image, the sun is clearly coming from the top left, so dropping in a sky where the light is clearly coming from the right would just look wrong:

“If I chose a new sky and I composited this together beautifully with perfect seams, but the sun was [in the wrong spot], it would not look right no matter how technically perfect you made this photo,” explains Nace. “The sun would be in the wrong place with the directionality of the shadows.”

This sets up the rest of the video, in which Nace explains how to use the new tool, refine the automatically generated edges, and pick a sky that has a chance of looking realistic.

He shows you how to adjust the available settings, like brightness, color temperature, and the scale of the sky you just dropped in; how to alter the character of the scene re-lighting by changing the Blend Mode (Screen or Multiply) and playing with the Lighting Adjustment slider; and, finally, how to output your results.
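We can’t speak to Photoshop’s exact implementation, but Screen and Multiply are standard blend modes with simple formulas (pixel values normalized to 0–1). Multiply can only darken and Screen can only lighten, which is why they map neatly onto darkening or brightening the scene to match the new sky:

```python
def multiply(base, blend):
    """Multiply blend: output is never brighter than either input (darkens)."""
    return base * blend

def screen(base, blend):
    """Screen blend: output is never darker than either input (lightens)."""
    return 1.0 - (1.0 - base) * (1.0 - blend)
```

For example, multiply(0.5, 0.5) gives 0.25 (darker than both inputs), while screen(0.5, 0.5) gives 0.75 (lighter than both).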

For that last setting, the new sky will either be placed in “New Layers” or “Duplicate Layers”—but either way, you’ll get a new Sky Replacement layer group that will contain all your adjustments separated out so you can keep fiddling with the masks, adjust your settings, or add more adjustment layers after the fact.

As Nace demonstrates, even an AI tool requires a little bit of work to get the final product looking just right. But if you put in that work, you can get the same results you would have gotten from a manual sky replacement in less than half the time. That’s the real benefit of an AI-powered tool like this—not a “one-click” edit, but a much faster way to get professional results.

Check out the full tutorial/test/demo up top to see Photoshop’s new Sky Replacement in action for yourself. And if you want to watch more Photoshop tutorials like this one, you can find lots more on the PHLEARN YouTube channel.

Tech Startup Boom Raises $7M, Wants To Be Amazon for Photography

Boom, a Milan-headquartered tech startup, has raised $7 million in Series A funding on the strength of its proprietary technology, which is said to give companies a way to purchase “high-quality” images affordably, on a global scale.

Boom states that its goal isn’t to change photography (or filmmaking, for that matter, as the system also supports booking drone pilots, videographers, designers, and other creatives), but to change the way visual content is created using “intelligent technology.” Boom says it has created proprietary artificial intelligence and machine learning technology that supposedly trims a photographer’s work down to the “bare essentials” while handling everything else, from logistics to post-production.

The promise to its corporate clients involves its platform’s streamlined method of matching client photoshoot requests with the best photographers in the area combined with an automatic photo-editing system to allow faster access to images.

As far as how this benefits photographers, Boom hopes that the promise of more work opportunities with less stress is enough to entice high-quality talent.


“We could see that countless internet giants were changing the way people shopped online, uploading billions of pictures on their websites and platforms every day, but these same brands had no access to a content provider that could keep up with their scaled-up, global, fast-paced environment. The whole system was expensive and obsolete,” Founder and CEO Federico Mattia Dolci said in a report on TechCrunch. “Our customers can place an order and expect a delivery 24h later, whether the photoshoots take place in Milan, New York, or Sydney, and whether the order calls for one photoshoot or a thousand! We guarantee speed, efficiency, and quality consistency every single time”.

Boom claims more than 250 major corporate clients, including Deliveroo, Vacasa, Uber Eats, OYO, Lavanda, Casavo, Westwing, and GetYourGuide.

The $7 million comes after a $600,000 seed round in June 2018 and a $3.4 million second round in July 2019. As of January 2020, Boom had 60 staff members and hoped to grow to 120 by the end of the year. The company claims to represent over 35,000 photographers, operates in more than 80 countries, and has processed more than 3 million images to date.

Boom says it will invest the latest round of funding into its “proprietary plug and play technology for managing the commercial photography production pipeline” and will expand its presence to 180 countries, including new offices and studios in London and New York. The company is pitching itself as wanting to become “the Amazon for commercial photography.”

(Via TechCrunch)

Sneak Peek: Skylum Shows Off Water Reflections in AI Sky Replacement Tool

Skylum, the maker of Luminar 4 and the soon-to-be-released Luminar AI, is taking a page out of the Adobe playbook. In a sneak peek video released earlier today, the company showed off the next iteration of its AI-powered Sky Replacement tool, which will be able to generate fake reflections from your “new” sky automagically.

Since its debut in 2019, AI Sky Replacement has become one of the more popular machine learning-powered features in Luminar. But while it does a decent job of cutting out the sky, dropping in something new, and adjusting the lighting in your image to match, there is one thing it notably does not do: it doesn’t add sky reflections to water.

That is, until now.

In so-called Sky AI 2.0, the editing software will automatically generate a reflection of your new sky on any highly reflective surface (mostly water) in your original photo. It’s not a huge change, but it adds that little bit of extra “pop” that can really sell the edit and make it look “real.”

And if it’s not looking quite right, you’ll be able to adjust the strength of the reflection or even add ripples to the surface of the water.
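Skylum hasn’t said how its reflections are generated, but the basic effect can be approximated by mirroring the new sky vertically and dimming it. This toy sketch (the function name and parameter are hypothetical, chosen to mirror the strength slider mentioned above) shows the idea:

```python
import numpy as np

def reflect_sky(sky, strength=0.7):
    """Crude water reflection: flip the sky top-to-bottom and dim it.

    sky:      H x W x 3 float array (the replaced sky region)
    strength: 0..1, analogous to a reflection-strength slider
    """
    return np.flipud(sky) * strength
```

A production tool would also warp the flipped sky to follow the water surface and overlay ripple distortion, which is presumably where the "add ripples" control comes in.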

You can see the tool in action in the video above, or check out some before and after images below:

According to Skylum, the new-and-improved Sky AI 2.0 will come to Luminar AI via a free software update sometime in 2021, and based on the wording in the video, it will not come to Luminar 4. Whether this means the AI features in Luminar 4 will be left alone or phased out, we don’t know; we have asked Skylum for clarification and will update this post if and when we hear back.

In the meantime, check out the upcoming feature in the video demo up top, and let us know what you think of all these AI-powered photo editing tools in the comments down below.

Microsoft’s New Image Captioning AI is More Accurate than Humans

AI researchers at Microsoft reached a major milestone this week: they managed to create a new “artificial intelligence system” that is, in many cases, actually better than a human at describing the contents of a photo. This could be a huge boon for blind and sight-impaired individuals who rely on screen readers and “alt text” when viewing images online.

While this might seem like one part of the prequel to Skynet, the development of a better image captioning AI has a lot of potential benefits, and warrants a bit of (cautious) celebration. As Microsoft explains on its blog: “[this] breakthrough in a benchmark challenge is a milestone in Microsoft’s push to make its products and services inclusive and accessible to all users.”

That’s because accurate automatic image captioning is used widely to create so-called “alt text” for images on the Internet—that’s the text that screen readers use to describe an image to sight-impaired individuals who rely on these accessibility options to make the most of their time online or when using certain apps on their smartphones.

Of course, Microsoft is careful to point out that the system “won’t return perfect results every time.” But as you can see from the examples in the video below, it’s far more accurate than the previous iteration. There’s a wide gulf between describing an image as “a close up of a cat” and describing that same image as “a gray cat with its eyes closed.”

“Ideally, everyone would include alt text for all images in documents, on the web, in social media – as this enables people who are blind to access the content and participate in the conversation. But, alas, people don’t,” explains Saqib Shaikh, a software engineering manager for Microsoft’s AI group. “So, there are several apps that use image captioning as way to fill in alt text when it’s missing.”

These apps can take advantage of the new system to generate accurate captions that “surpass human performance,” a claim that’s based on the nocaps image captioning benchmark that compares AI performance against the same data set captioned by humans.

Here’s another example of the improved AI in action, pulled from the video above:

Given the potential accessibility benefits of the improved captioning system, Microsoft has rushed this model into production and has already integrated it into Azure’s Cognitive Services, enabling interested developers to begin using the tech right away.

To learn more about this system and how it works, head over to the Microsoft blog or read up on the nitty gritty details here. Suffice it to say this isn’t exactly Skynet, but we can be pretty sure that future Terminators will be able to describe your photo library better than you can…

(via Engadget)

UK Passport Photo Checker Shows Bias Against Dark-Skinned Women

According to an investigation by the BBC, women with darker skin are more than twice as likely as fair-skinned men to have their photos rejected by the United Kingdom’s automated online passport photo checker.

The United Kingdom offers an online service that lets applicants submit their own images for use on passports, which should theoretically let them get their passports more quickly; those with the means to photograph themselves at home can also avoid paying to have a photo taken. The guidelines include having a neutral expression, keeping the mouth closed, and looking directly at the camera. If a submitted photo does not meet all of the criteria, it is rejected as being “poor quality.”

According to the BBC, a student named Elaine Owusu found that the online portal rejected her image for having an “open mouth,” which, judging from the image itself, was clearly not the case. Owusu eventually got the photo approved after challenging the verdict, but she had to write a note arguing that her mouth was indeed closed.

Though she did win, she wasn’t happy about it. “I shouldn’t have to celebrate overriding a system that wasn’t built for me,” she told the BBC.

To determine whether there was a systemic problem, the BBC fed more than 1,000 photographs of politicians (drawn from the Gender Shades study) into the system to look for patterns. It found that dark-skinned men’s images were flagged as poor quality 15% of the time, compared to 9% for light-skinned men. For women, the gap was wider: dark-skinned women’s images were rejected 22% of the time, versus 14% for women with light skin.
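The headline “more than twice as likely” figure follows directly from these rates; a quick calculation (using the BBC’s numbers as quoted above) makes the disparities explicit:

```python
# Rejection ("poor quality") rates from the BBC's test of 1,000+ photos.
rates = {
    ("dark", "men"): 0.15,   ("light", "men"): 0.09,
    ("dark", "women"): 0.22, ("light", "women"): 0.14,
}

def disparity(group_a, group_b):
    """How many times more often group_a's photos are rejected than group_b's."""
    return rates[group_a] / rates[group_b]
```

With these numbers, disparity(("dark", "women"), ("light", "men")) comes out to roughly 2.4, which is where the “more than twice as likely” figure comes from.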

Computers are only as unbiased as the information they are given. In 2019, The New York Times published a detailed article on the history of racial bias built into the basics of photography, an issue that continues to surface in newer technologies like the UK’s automated photo checker.

“The accuracy of face detection systems partly depends on the diversity of the data they were trained on,” David Leslie of the Alan Turing Institute wrote in response to the BBC investigation. “The labels we use to classify racial, ethnic and gender groups reflect cultural norms, and could lead to racism and prejudice being built into automated systems.”

When a system like this doesn’t work for everyone, the designer of the software would normally be asked to explain. Unfortunately, the government declined to name the external company that provided the automated checker.

As a result, a solution to the problem uncovered by this investigation – where the system in place fails for a disproportionate number of dark-skinned people – is not immediately apparent.

(Via BBC)