Post-Processing

Researchers Release Improved 3D Models Made From Tourists’ Photos

Artificial Intelligence and machine learning advancements have allowed researchers to build detailed 3D models of real-world locations by using the reference data of thousands of tourists’ photos. The finished models have cleanly removed unwanted objects and even normalized lighting conditions.

The project and associated paper, titled Neural Radiance Fields for Unconstrained Photo Collections, were originally published in August 2020. The project was recently updated with more examples of its application, a deep-dive video explaining how the program works, and published findings that take the idea of converting 2D to 3D a step further.

To recap, the researchers used a photo-tourism data set of thousands of images to produce highly detailed models of iconic locations.

“You can see that we are able to produce high-quality renderings of novel views of these scenes using only unstructured image collections as input,” the researchers say.

“Getting good results from uncontrolled internet photos can be a challenging task because these images have likely been taken at different times,” the researchers explain. “So the weather might change or the sun might move. They can have different types of post-processing applied to them. Also, people generally don’t take photos of landmarks in isolation: there might be people posing for the camera, or pedestrians or cars moving through the scene.”

The project is a learning-based method for synthesizing views of complex scenes using only in-the-wild photographs. The researchers — Ricardo Martin-Brualla, Noha Radwan, Mehdi S. M. Sajjadi, Jonathan T. Barron, Alexey Dosovitskiy, and Daniel Duckworth — built on Neural Radiance Fields (NeRF), which uses multi-view data to model the density and color of a scene as a function of 3D coordinates.

The original NeRF program could not model real-world scenes from uncontrolled image sets, as it was designed for controlled capture settings. These researchers decided to tackle that particular weakness and enabled accurate reconstructions from completely unconstrained image collections taken from the internet.

“We apply our system, dubbed NeRF-W, to internet photo collections of famous landmarks, and demonstrate temporally consistent novel view renderings that are significantly closer to photorealism than the prior state of the art,” the team writes.

“NeRF-W captures lighting and photometric post-processing in a low-dimensional latent embedding space. Interpolating between two embeddings smoothly captures variation in appearance without affecting 3D geometry,” the team explains. “NeRF-W disentangles lighting from the underlying 3D scene geometry. The latter remains consistent even as the former changes.”
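To make that disentanglement concrete, here is a minimal toy sketch of the idea: density is computed from position alone, while color also depends on a per-image appearance embedding, so interpolating between embeddings changes lighting but not geometry. The weights here are random stand-ins and every name (render_point, W_feat, and so on) is illustrative, not from the paper's actual code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for trained MLPs (random weights, purely illustrative).
W_feat = rng.normal(size=(3, 64))     # 3D position -> shared features
W_sigma = rng.normal(size=(64, 1))    # features -> density (no embedding input)
W_rgb = rng.normal(size=(64 + 8, 3))  # features + appearance embedding -> color

def render_point(x, appearance):
    """Map a 3D point and a per-image appearance embedding to (rgb, sigma).

    Density (sigma) depends only on position, so the 3D geometry stays
    fixed while the appearance embedding controls lighting variation.
    """
    feat = np.tanh(x @ W_feat)
    sigma = np.log1p(np.exp(feat @ W_sigma))  # softplus keeps density non-negative
    logits = np.concatenate([feat, appearance]) @ W_rgb
    rgb = 1.0 / (1.0 + np.exp(-logits))       # sigmoid keeps color in [0, 1]
    return rgb, sigma

x = np.array([0.1, -0.4, 0.7])                # a sample 3D coordinate
day, night = rng.normal(size=8), rng.normal(size=8)

# Interpolating between two appearance embeddings changes color smoothly
# but leaves sigma (the geometry) untouched.
for t in (0.0, 0.5, 1.0):
    emb = (1 - t) * day + t * night
    rgb, sigma = render_point(x, emb)
    print(f"t={t:.1f}  rgb={np.round(rgb, 3)}  sigma={np.round(sigma, 3)}")
```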

The model of the Brandenburg Gate above is extremely high resolution. While available to preview above, it can also be seen in Full HD as well as QHD (1440p).

These advancements have resulted in much cleaner, less noisy 3D models that are far superior to the original Neural Renderings in the Wild from last year. Below are a couple of still capture examples, but the benefits of this latest advancement are clearer when seen in motion via the video above.

The video explanation of how this program works is fascinating for anyone interested in the advancement of artificial intelligence. What these researchers have done is extremely impressive, and it will be interesting to see what applications of this technology come in the future. You can learn more about the project and technology here.

How to Colorize Photos in Photoshop with Just a Few Clicks Using AI

Adobe recently gave Photoshop the ability to instantly colorize photos using Adobe Sensei AI technology. Here’s a new 1.5-minute video tutorial by Adobe showing how you can now breathe color into a black-and-white photo with just a few clicks.

After loading up your photo, go to Filter > Neural Filters to open up the new Neural Filters panel.

In the beta filters section (the Erlenmeyer flask icon), you’ll see a Colorize option. Click the toggle to turn it on.

Voila! Photoshop will use its image recognition technology to colorize the elements of your photos in the way it thinks best.

Before applying the Colorize Neural Filter in Photoshop.
After applying the Colorize Neural Filter in Photoshop.

If certain areas of the photo are slightly off, you can make custom adjustments in the Colorize panel as well. The result is added on top of your photo layer as a Smart Filter on a Smart Object.

To get started with the Colorize Neural Filter, make sure you’ve updated to the latest version of Photoshop CC. You’ll also need around 130MB of disk space to install the Colorize filter itself.

Understanding the Differences Between Clarity, Texture, and Dehaze

YouTuber Kevin Raposo believes that many photographers use the Clarity, Texture, and Dehaze sliders in Lightroom without fully understanding what each does. To help, he’s uploaded this quick 4-minute video that teaches you the differences between them and when best to use each.

Raposo explains that the Texture tool adds or reduces contrast in an image based on frequency, which refers to the consistency of colors, brightness, and shadows in a section of an image. Low-frequency sections of an image, like skies, have relatively consistent values; high-frequency sections are the opposite and have less consistency.

The Texture slider specifically targets the high-frequency areas and adds sharpness and definition when moved up and blurs them when moved down.

The Clarity tool affects frequency by targeting the midtones. Raposo explains the relationship between these two sliders by showing each slider’s effect on an image taken out of a plane window.

The Clarity slider, when dropped down, blurs the plane wing while the Texture slider does not. This is because the plane wing is mid to low frequency, which the Texture slider largely ignores but the Clarity slider directly affects.
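Lightroom’s exact pipeline is proprietary, but the frequency-band idea Raposo describes can be sketched with a simple band split: blur the image at two scales, treat the residuals as mid- and high-frequency detail, and scale those bands up or down. Everything below (the function names, the sigma values) is an illustrative assumption, not Adobe’s implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def split_bands(img, sigma_low=8.0, sigma_mid=2.0):
    """Split a grayscale image into low-, mid-, and high-frequency bands."""
    low = gaussian_filter(img, sigma_low)        # smooth base: skies, plane wing
    mid = gaussian_filter(img, sigma_mid) - low  # medium detail: Clarity's territory
    high = img - low - mid                       # fine detail: Texture's territory
    return low, mid, high

def adjust(img, texture=0.0, clarity=0.0):
    """Positive values sharpen a band, negative values soften it."""
    low, mid, high = split_bands(img)
    return low + (1 + clarity) * mid + (1 + texture) * high

# A noisy gradient stands in for a photo: a low-frequency "sky"
# plus high-frequency grain.
img = np.linspace(0.0, 1.0, 256)[None, :] * np.ones((256, 1))
img += 0.05 * np.random.default_rng(0).normal(size=img.shape)

softened = adjust(img, texture=-0.8)  # blurs the grain, leaves the gradient alone
punchier = adjust(img, clarity=+0.5)  # boosts mid-frequency contrast
```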

Dehaze is meant to be used to restore color and contrast to a washed-out image, but can also make colors unrealistic if used with too heavy of a hand.

Raposo explains the ways he uses each of these three sliders for different purposes in the video above, so make sure to watch it in full to see how he recommends utilizing each of these three tools in your own work.

For more from Kevin Raposo, you can subscribe to his YouTube Channel.

The Process of Colorizing and Animating an 80-Year-Old Photograph

Photo colorizer and restorer Hint of Time has shared an 8-minute video in which he shows his process for not only colorizing an 80-year-old black-and-white photo, but also bringing it to life with subtle animation.

While usually stopping at colorizations on his YouTube Channel, Hint of Time decided to go one step further with this latest work on a photo of a New Jersey farmworker taken by Marion Post Wolcott in 1941.

“I decided to add a little twist and not only colorize but also give this photograph a 3D effect,” he writes. “It took me about 5 hours to colorize and animate this old black and white photograph in Photoshop and After Effects.”

Hint of Time has uploaded multiple videos that show his process for colorization, and his results are rather impressive:

Some prominent historians argue that the colorization of historical photos should be avoided, but Hint of Time says he does his research to make sure the colors he adds are as true to life as possible.

“When I do colorization or restoration work on an old black and white photo I always do research before anything else and try to find the colors of historically significant elements like uniforms, a person’s features, buildings, or even popular color palettes of the decade,” he writes. “The colors are then adapted to the scene in the photograph and added by hand with the help of Adobe Photoshop. Color accuracy is more important than perfectly drawing over the edges. The objective is to have an end result with colors as historically accurate as possible. The research, colorization, and restoration are processes that take a long time but the dramatic comparison of the black-and-white photos and the colorized version of the picture always makes it worth the hard work.”

For more from Hint of Time, subscribe to his YouTube Channel or follow him on Instagram.

(via r/ArtisanVideos)

Making Relative Adjustments to Sets of Photos in Lightroom

Sometimes it may be necessary to make a set of images darker or lighter, more or less contrasty, or otherwise change their appearance while retaining their relative differences. Lightroom can make relative adjustments on a selected set of images to facilitate that.

Look at the following images, for instance. They convey the same look and feel although they required slightly or significantly different adjustments to look that way. The Basic adjustments panels for each are below the images.

The relative adjustments are available in the Grid or Survey modes. I used Survey mode by pressing the letter N after selecting all three images. Now only they are visible, and a good deal larger than the thumbnails I would have gotten in Grid view. Here is the Survey mode screen with the images selected.

Note that Auto Sync is turned on (see the lower right of the image). That will carry the adjustments made to one image over to the other selected ones. The relative adjustment controls are under the Quick Develop tab. Instead of the Develop module’s sliders, here we have single and double arrows for increasing or decreasing the adjustments. Whatever adjustments we make here are applied relative to each selected image’s current settings.

The single right and left arrows take small steps and the double arrows make bigger jumps. The amounts vary by control. In the case of exposure, the single arrows increase or decrease the exposure in increments of a third of a stop (0.33), while the double arrows make full-stop adjustments.

For all the other adjustment sliders, the single and double arrows work in units of 5 and 20, respectively. All the adjustments are added to or subtracted from the existing level of each Develop-module slider.
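As a worked example of that arithmetic (the exposure values below are made up for illustration):

```python
# Current Exposure settings (in stops) for three images that were
# adjusted individually to share the same look:
exposures = [+0.50, -0.30, +1.20]

# Quick Develop arrows apply a *relative* delta: a single arrow is
# 1/3 stop (0.33), a double arrow a full stop. Two double-arrow
# clicks brighten everything by 2 stops while preserving the
# differences between the images.
relative = [e + 2 * 1.0 for e in exposures]   # [2.5, 1.7, 3.2]

# Syncing a slider in the Develop module is *absolute*: it drags
# every image to the same value, destroying those differences.
absolute = [2.0 for _ in exposures]           # [2.0, 2.0, 2.0]

print(relative)   # relative look preserved
print(absolute)   # relative look lost
```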

Making the adjustments involves clicking the arrows up or down until the desired look is achieved, without worrying about remembering the increment applied to one image and trying to replicate it on the others. The changes are easily visible on the images as you click the arrows. Here is the same set of photographs after the adjustments I made to all, with the Basic panel captures for each below them.

Now, if you have multiple photographs of the same scene or subject that you adjusted to present the same look and feel, you can alter them all simultaneously for a new and better look. Keep in mind that selecting the images in the Develop module and moving the sliders there does not accomplish this: those are absolute adjustments, so the sliders end up in the same positions, and the images will not look the same if they had different adjustments applied to them.

Here is a comparison of before and after Survey Mode screens.

Before
After

Ending 2020, Welcome 2021

This article will be my last one of 2020 and it has many “before-after” comparisons. We are all looking forward to a much improved “after” state in 2021 as we leave 2020 behind.

A very happy holiday season and a happy new year.

May 2021 bring health and happiness to everyone.

And, let us remember those who suffered or were taken by the virus.


About the author: A. Cemal Ekin is a photographer based in Warwick, Rhode Island, who has been shooting for roughly 60 years. The opinions expressed in this article are solely those of the author. Ekin retired as professor emeritus of marketing from Providence College in 2012 after 36 years of service there. Visit his website here. This article was also published here.

How this three-principles editing template can help your creative vision

When I work with an image, I want to create something pleasing to the eyes, a piece of art with a wow-factor. I desire to produce a scene that takes the viewer on a journey from foreground to background. When it comes to editing, it really helps to have a guiding template. It helps the […]

The post How this three-principles editing template can help your creative vision appeared first on DIY Photography.

Photoshop vs Luminar 4: Which Performs Sky Replacements Better?

Adobe recently added the ability to do easy and accurate sky replacements in Photoshop, very similar to a feature that was the main reason to pick up Skylum Luminar. So, which is better? In this 18-minute video, YouTuber Serge Ramelli decides to find out.

Ramelli tests multiple photos on both platforms to see which of the two programs worked better with the same base image and sky replacement. Ramelli has shared one pair of images with PetaPixel. The first image was created using Adobe Photoshop 2021:

And this image was made using Skylum Luminar 4:

When viewed as a whole, both photos at first look very good. However, neither is perfect when we zoom in to 100% on some of the finer details:

Left: Luminar; Right: Photoshop

Skylum Luminar seems to do a better job of keeping a distinct edge on the tower, but it struggled to retain that edge as the tower becomes thinner near the top. Photoshop keeps a more consistent edge on the building regardless of where on the tower it is masking, but it tends to put the clouds over the tower rather than behind it, making the peak appear inexplicably high in the clouds or surrounded by fog.

The main takeaway here is that no matter which you use, you probably will have to spend a bit of time refining the final image to make sure it looks realistic.

Ramelli also shows that neither program adjusts how the sky should look when it is reflected in a body of water, like a lake. The realism of the sky replacement is lost even when the color balance of the image is shifted to match whatever new sky is placed. As a result, if you have a large body of water in your photo and you want a more realistic replacement, Serge recommends a more manual process that is still relatively fast: mirror the sky, set the layer’s blend mode to Multiply, and mask it over the scene. You can get that detailed explanation starting at the 15-minute mark of the video.
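In pixel terms, the mirror-and-Multiply trick amounts to flipping the sky vertically and multiplying it into the water region. Here is a rough numpy sketch of that general idea; the function name and the simplified array layout are assumptions, not Ramelli’s actual Photoshop steps.

```python
import numpy as np

def mirror_multiply_reflection(scene, sky_rows):
    """Flip the replaced sky vertically and Multiply it over the water.

    `scene` is a float RGB array in [0, 1] with the sky in the top
    `sky_rows` rows and the water in the bottom `sky_rows` rows
    (an assumption made to keep the example simple).
    """
    sky = scene[:sky_rows]        # top of frame: the replaced sky
    mirrored = sky[::-1]          # vertical flip, as if reflected
    out = scene.copy()
    out[-sky_rows:] *= mirrored   # Multiply blend: result = base * blend
    return out

scene = np.random.default_rng(0).random((240, 320, 3))  # stand-in image
result = mirror_multiply_reflection(scene, sky_rows=80)
```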

“Luminar is a great option if you don’t want to pay for Photoshop,” Ramelli says. “Photoshop is great, I think it has more features than Luminar and I think it is a little better.”

For more from Ramelli, you can subscribe to his YouTube Channel.


Image credits: Serge Ramelli, used with permission.

How to Get Professional Results with Photoshop’s AI Sky Replacement Tool

One of the major updates to the latest version of Photoshop is the addition of Sky Replacement: a tool that has the potential to save you a ton of time when editing your landscape images. But as Aaron Nace explains in this video, this AI-powered tool requires a bit of thought if you want to get professional results.

AI-powered photo editing tools are always sold as “one click” or “a few clicks” solutions that can transform a photo with next to no input from you. But even with the most advanced machine learning available, no automated tool can generate foolproof results without a little bit of thought from the creator on the other end of that mouse.

Photoshop’s new Sky Replacement tool is a great example of this principle in action, as PHLEARN‘s Aaron Nace explains in the video above.

In the course of testing out AI Sky Replacement and showing you how it actually works inside Photoshop, Nace takes plenty of time to explain how to analyze the lighting in your original photo and create the most realistic composite possible. In his example image, the sun is clearly coming from the top left, so dropping in a sky where the light is clearly coming from the right would just look wrong:

“If I chose a new sky and I composited this together beautifully with perfect seams, but the sun was [in the wrong spot], it would not look right no matter how technically perfect you made this photo,” explains Nace. “The sun would be in the wrong place with the directionality of the shadows.”

This sets up the rest of the video, in which Nace explains how to use the new tool, refine the automatically generated edges, and pick a sky that has a chance of looking realistic.

He shows you how to adjust the various settings available, like brightness, color temperature, and scale of the sky you just dropped in; how to alter the character of the scene re-lighting by changing the Blend Mode (Screen or Multiply) and playing with the Lighting Adjustment slider; and, finally, how to output your results.

For that last setting, the new sky will either be placed in “New Layers” or “Duplicate Layers”—but either way, you’ll get a new Sky Replacement layer group that will contain all your adjustments separated out so you can keep fiddling with the masks, adjust your settings, or add more adjustment layers after the fact.
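For reference, the two blend modes Nace toggles between follow the standard compositing formulas: Multiply darkens and Screen lightens. A quick sketch (the sample values are made up):

```python
import numpy as np

def multiply(base, blend):
    """Multiply darkens: the result is never brighter than either input."""
    return base * blend

def screen(base, blend):
    """Screen lightens: Multiply applied to the inverted inputs, inverted back."""
    return 1.0 - (1.0 - base) * (1.0 - blend)

base = np.array([0.2, 0.5, 0.8])  # original pixel values in [0, 1]
sky = np.array([0.6, 0.6, 0.6])   # mid-gray sky layer

print(multiply(base, sky))  # [0.12 0.3  0.48] -> darker, suits dim skies
print(screen(base, sky))    # [0.68 0.8  0.92] -> lighter, suits bright skies
```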

As Nace demonstrates, even an AI tool requires a little bit of work to get the final product looking just right. But if you put in that work, you can get the same results you would have gotten from a manual sky replacement in less than half the time. That’s the real benefit of an AI-powered tool like this—not a “one-click” edit, but a much faster way to get professional results.

Check out the full tutorial/test/demo up top to see Photoshop’s new Sky Replacement in action for yourself. And if you want to watch more Photoshop tutorials like this one, you can find lots more on the PHLEARN YouTube channel.

New Video Shows Off the Powerful Landscape Editing Tools in Luminar AI

In their latest demo of the upcoming Luminar AI photo editor, Skylum takes aim at landscape photographers and shows them just how powerful Luminar’s machine learning-based tools really are. From Enhance AI for relighting and color grading, to Atmosphere AI for adding fog and other effects, there are some impressive automatic editing tools coming to your laptop very soon…

The short demo covers a lot of ground in just 63 seconds. Not only does it re-highlight the latest updates coming to Sky Replacement AI, it also shows off several new landscape editing tools that will be built into the AI editing program. Those include:

  • Composition AI – Automatically crop and straighten images with one click.
  • Enhance AI – Automatically detects uneven color and lighting and balances them for you.
  • Sky AI – Swap out the sky in your image and tweak the horizon blending, position, and scene relighting to taste.
  • Atmosphere AI – Add realistic atmospheric effects like fog.
  • Golden Hour Image Relighting – Tucked alongside options for Dehaze and Foliage Enhancer, this feature allows you to intelligently relight the scene to give it a golden hue.

With Adobe making a not-so-subtle grab for Luminar AI’s prospective audience with the release of Neural Filters in Photoshop, Skylum obviously wants to make it clear that they’re not going anywhere. Adobe may have more money and manpower to throw at the problem, but Luminar’s creators are trying to make the most of their head start in the AI photo editing space.

(via 43 Rumors)