Earlier this month, I detailed my experience with Apple's new Object Capture API, which was introduced with macOS Monterey to let users create 3D models using the iPhone camera. With the Object Capture API, users can capture objects and turn them into 3D models in just a few minutes. Now developer Ethan Saadia has created PhotoCatch, a new app based on this API that makes the whole process simpler.

Since the API was only recently announced and is still in beta, you previously had to compile it manually using sample code from Apple. Thanks to PhotoCatch, that is no longer necessary.

The app works just as I described in my article about the new API, but it makes the process more intuitive for users who are not familiar with Xcode or Terminal. Once you open it, all you need to do is select a folder containing the photos you have taken of the object, choose the settings you want, and click the Create Model button.

The result is an object rendered as a USDZ file that can be shared with other iPhone and iPad users for AR interactions, or even imported into other apps like Cinema 4D.

It's worth noting that since the app is based on Apple's API, PhotoCatch for macOS requires an Intel Mac with 16GB of RAM and an AMD GPU with at least 4GB of VRAM, or any Mac with the M1 chip. Interestingly, you can still use PhotoCatch even without a compatible Mac.
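For readers curious about what PhotoCatch automates, here is a minimal sketch of how Apple's Object Capture API (PhotogrammetrySession in RealityKit) is typically driven from a small macOS command-line tool. The input and output paths are placeholders, and this is not PhotoCatch's actual source code, just an illustration of the underlying API the app wraps.

```swift
import Foundation
import RealityKit

// Placeholder paths: a folder of photos of the object and the USDZ file to write.
let photosFolder = URL(fileURLWithPath: "/path/to/photos", isDirectory: true)
let outputModel = URL(fileURLWithPath: "/path/to/model.usdz")

// Configure the photogrammetry session (roughly what an app's "settings" map to).
var configuration = PhotogrammetrySession.Configuration()
configuration.sampleOrdering = .unordered
configuration.featureSensitivity = .normal

do {
    // Create the session over the image folder and request a medium-detail USDZ model.
    let session = try PhotogrammetrySession(input: photosFolder, configuration: configuration)
    let request = PhotogrammetrySession.Request.modelFile(url: outputModel, detail: .medium)

    // Listen for progress, completion, and error messages from the session.
    Task {
        for try await output in session.outputs {
            switch output {
            case .requestProgress(_, let fractionComplete):
                print("Progress: \(Int(fractionComplete * 100))%")
            case .requestComplete(_, .modelFile(let url)):
                print("Model written to \(url.path)")
            case .requestError(_, let error):
                print("Request failed: \(error)")
            case .processingComplete:
                exit(0)
            default:
                break
            }
        }
    }

    // Start processing and keep the tool alive until the session finishes.
    try session.process(requests: [request])
    RunLoop.main.run()
} catch {
    print("Could not create model: \(error)")
}
```

Building something like this yourself as a Swift command-line tool is essentially the compile-it-yourself workflow described above; PhotoCatch simply puts a point-and-click interface over the same steps.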