Hope everyone had a great Fourth of July yesterday. We did the usual grilling with friends and family before the local town fireworks. Though DC is right next door, I have never been to the Nation's Capital's fireworks: if you are a local who lives more than five miles from the Mall, you know it's just not worth the hassle when there are so many smaller displays in neighboring towns.
As for what's going on with R&D editions, let's just say it's getting fun. Since my last post, I have met with Chris Bathgate and Stephen Lambert. Chris and I had a brief meeting when I drove up to his studio to drop off four new copper plates, cut down to their proper size and ready for him to engrave new images for final proofing.
He has been busy the past two weeks, so hopefully sometime in the coming week I will hear from him and we can schedule a time to get him in the shop to proof those plates.
As for Stephen Lambert, we met this past Tuesday night and were able to discuss and play with the first algorithm he has finished for the series, based on computer vision research. This initial algorithm is an edge detection system that helps define the objects in an image. It is currently very rough, but it is also meant to be one of the last algorithms run in a series that cleans up the image for understanding. I have been playing with it over the past two weeks and am still getting a handle on the best settings for the subject matter and complexity of a given image. Here are some examples:
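Steve's actual code is his own, but for the curious, the general idea behind gradient-based edge detection can be sketched in a few lines of Python. This is purely my illustration of the classic Sobel approach, one common way edge detection is done; the function name, the threshold setting, and the details are mine, not Steve's:

```python
import numpy as np

def sobel_edges(img, threshold=0.25):
    """Gradient-magnitude edge detection on a 2-D grayscale array in [0, 1]."""
    # Sobel kernels estimate the horizontal and vertical brightness gradients
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    # Pad with edge values so the output matches the input size
    p = np.pad(img.astype(float), 1, mode="edge")
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(3):
        for j in range(3):
            window = p[i:i + h, j:j + w]
            gx += kx[i, j] * window
            gy += ky[i, j] * window
    # Combine the two gradients into one edge-strength map
    mag = np.hypot(gx, gy)
    m = mag.max()
    if m > 0:
        mag /= m
    # Pixels whose gradient exceeds the threshold are called "edges"
    return mag > threshold
```

Lowering the threshold pulls out fainter detail but also more noise, which matches my experience tuning the settings per image.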
After seeing the results of the above tests, I contacted Steve to see if he had any suggestions on settings that might improve the outcome of the second test image: although you and I can see the details and folds, the algorithm was overlooking them based on its interpretation of the information. Steve informed me there was a second algorithm loaded, but it was not as refined as he wants, which is why he did not initially mention it. Below are the results of adding in this second algorithm:
The histogram equalizer boosts the contrast of an image's tones, and the result on the right shows definite improvement, but the outcome still did not match what the human eye sees. With a very complicated image as the first test and something I consider much simpler as the second, it's quite interesting to see how the algorithm interprets them. The many colors and high contrast of the more complicated image allowed the algorithm to find and detect many more edges and details, while the simpler image's more gradual tonal change and range makes it harder for the algorithm to detect the details that we can naturally see. Here is the final test image and one result that I did:
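For anyone wondering what a histogram equalizer actually does under the hood: it remaps pixel values so the image's tonal range is stretched to use the full scale, which is why it helps the edge detector on low-contrast images. Here is a minimal sketch of the standard technique for 8-bit grayscale; again, this is my own illustration, not Steve's code:

```python
import numpy as np

def equalize_histogram(img):
    """Classic histogram equalization for an 8-bit grayscale array."""
    # Count how many pixels fall at each of the 256 brightness levels
    hist = np.bincount(img.ravel(), minlength=256)
    # The cumulative distribution tells us each level's rank in the image
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]          # first nonzero CDF value
    denom = img.size - cdf_min
    if denom == 0:                     # flat image: nothing to stretch
        return img.copy()
    # Remap so the CDF becomes roughly linear, spreading tones across 0-255
    lut = np.round((cdf - cdf_min) / denom * 255).astype(np.uint8)
    return lut[img]
```

A narrow band of grays, like the subtle folds in my second test image, gets spread across the whole tonal range, handing the edge detector much stronger gradients to work with.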
Above you can see the slight increase in information the algorithm was able to detect and pull out thanks to the increased contrast from the histogram equalizer. There will be further tests and more processing algorithms in the future, but this is a great initial start. Steve will be working on a few more over the coming month or so; one of the next ones will help detect areas of color, separating the image into regions for further process interpretation.
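I don't know yet which approach Steve will take for the color-area algorithm, but one simple and common way to group an image into color regions is k-means clustering: pick a number of representative colors and assign every pixel to the nearest one. A rough sketch, with deterministic seeding so the results are repeatable (all names and parameters here are my own, hypothetical illustration):

```python
import numpy as np

def kmeans_color_regions(pixels, k=3, iters=20):
    """Naive k-means over an (N, 3) array of RGB values; returns (centers, labels)."""
    pts = np.asarray(pixels, dtype=float)
    # Deterministic seeding: start the k centers at evenly spaced pixels
    centers = pts[np.linspace(0, len(pts) - 1, k).astype(int)].copy()
    labels = np.zeros(len(pts), dtype=int)
    for _ in range(iters):
        # Assign each pixel to its nearest center by Euclidean distance
        dists = np.linalg.norm(pts[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each center to the mean color of its assigned pixels
        for c in range(k):
            members = pts[labels == c]
            if len(members):
                centers[c] = members.mean(axis=0)
    return centers, labels
```

Each resulting label group is a candidate "area of color" that later steps in the series could interpret separately.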
There will be more soon from Chris and Steve, and hopefully an update on Tom Petzwinkler's print. For now, check in next week for the first profile posting for the Intersecting Methods 2016 portfolio, introducing the collaborative duos in preparation for the final portfolio in January.