This is not a review. If you want a review of this film, read Rotten Tomatoes; I think the whopping 17% is a good barometer for where this film stands. Just because it's 3D is NO EXCUSE for bad storytelling. If I were to judge it purely by its technical accomplishments, I'd personally rate it much lower, and here is why…

I am going to show you some examples of the bad 3D I saw in the film, and how it could have been corrected easily in the composite. I got all the source anaglyphic images from the film's official website. The originals really are this bad. You may say "that's not too bad," and maybe that's true for a few seconds on a monitor, in small doses. But this is a major film release by nWave, one of my favorite production companies for stereo 3D films. When FMTTM is projected on a large theater screen, or worse at IMAX, and these problems are multiplied across many shots for 89 minutes, the 3D cannot be resolved comfortably by our brains. This produces headaches, motion sickness, and nausea.

Example #1

The first example is here. This family of maggots (who doesn't want to cuddle one of these?) is sleeping on the bed. Examining the anaglyph to the right, we can see a very large amount of horizontal parallax. This much parallax pushes the image out into positive Z space, toward the audience. The edges of the bed break frame and vibrate spatially as our brains try to resolve the screen-plane conflict. I do not detect any vertical parallax or toe-in keystone distortion. Overall the shot has decent composition; it just lacks a cohesive 3D interocular that would be correct for this type of shot. Now put on your red/cyan glasses and check this out!

I have taken the first image into Adobe Photoshop and simply slid the red channel about 20 pixels to the left to set the correct screen plane for this image. I then cropped the image to clip off any missing channel information. Note that now 90% or more of the image is behind the screen, creating a window into a 3D world. The definition on the characters is much more dimensional, the image reads much better, and the edges are no longer fighting your eyes and brain to resolve the breaking-frame conflict. Ghosting has been minimized without any further color correction or manipulation.
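For readers who would rather script this than use Photoshop, the same slide-and-crop can be sketched with numpy. This is a rough sketch of my own, not nWave's pipeline; the 20-pixel figure is specific to this shot and has to be eyeballed per image:

```python
import numpy as np

def reconverge_anaglyph(img, shift):
    """Slide the red (left-eye) channel `shift` pixels to the left,
    then crop away the columns left without valid red data.
    img: H x W x 3 uint8 array; red = left eye, green/blue = right."""
    out = img.copy()
    out[:, :-shift, 0] = img[:, shift:, 0]  # red channel moves left
    return out[:, :-shift]                  # crop the stale right edge

# Usage on a decoded frame: fixed = reconverge_anaglyph(frame, 20)
```

Sliding the left eye's channel left relative to the right eye adds uniform positive parallax, which is exactly what pushes the whole scene back behind the screen plane.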

Example #2

The moon shot seen here is fairly decent, but it lacks a few important considerations. For one, there is no Z depth beyond the screen plane. Everything exists in positive depth space and breaks the frame. Note how the sign they are sitting on and the leaves at the side of frame fight for ocular dominance. The moon looks flat and should at least be slightly tucked into the screen. Objects seen at great distances should have a parallax of 65 mm at their furthest point on the projected screen; this is your eyes in a parallel configuration. To have no parallax is to say the moon is at the screen plane and everything else in the foreground is popping off into the audience. This is wrong.

Again, I used Photoshop to slide the red channel. You can see the color fringing on the moon setting it back behind the screen. In fact, because everything was in positive space, this slight shift sent everything back into the screen, making it much more pleasant to view. If I had access to the full layers, I would have made more individual adjustments here to keep the depth a bit more dynamic. But you must be careful: objects seen at infinity will ALWAYS BE 65mm SEPARATION ON THE SCREEN, whether it's the moon or a tree 100 yards from the viewer. Human depth perception falls off after 100 feet or so; other depth cues like size, color, and atmosphere are what give a scene depth at those distances, not parallax.
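To make the 65mm rule concrete, here is a quick back-of-envelope calculation. The screen sizes and resolution are my own hypothetical numbers (assuming a 2K projector), not figures from nWave:

```python
def max_infinity_parallax_px(screen_width_m, horizontal_res, eye_sep_mm=65.0):
    """Pixel budget for the parallax of objects at infinity.
    Their on-screen separation must not exceed the average human
    interocular distance (~65 mm), no matter how big the screen is."""
    mm_per_pixel = screen_width_m * 1000.0 / horizontal_res
    return eye_sep_mm / mm_per_pixel

# On a hypothetical 20 m IMAX screen at 2048 px across, one pixel is
# ~9.8 mm wide, so infinity gets a budget of only ~6.7 px of parallax.
# On a 0.5 m monitor at 1920 px, the same 65 mm is roughly 250 px,
# which is why separation that looks fine on a monitor can be painful
# in a theater.
```

The asymmetry between those two numbers is the whole point: the pixel budget shrinks as the screen grows, so stereo must be reviewed at (or scaled to) the target screen size.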

Example #3

This image represents the very worst that FMTTM has to offer. This frame is a complete mess. We are almost exclusively off into positive depth space, every element is breaking frame, and worst of all, the cameras are toed in so much that it creates keystoning, which invites vertical parallax. With a wide shot like this, it's imperative to keep things from breaking frame and let the viewer soak in all the great details. An Apollo capsule like this, with all its details, would have made such a great 3D shot; it's totally ruined by the volatile stereo. This shot is so harsh that it's difficult to view for more than a few seconds. Your brain will fight and fight to resolve the discrepancies, but this can only produce severe eyestrain.

I had a real hard time fixing this shot. Due to the large amount of keystoning and toe-in, adjusting this was a battle with many competing factors. To make the foreground comfortable, the background got pushed beyond what I would call a safe zone for far, or negative, screen parallax, as I talked about in example #2. This also exacerbated the vertical mismatching on the rear window. And that brings me to the biggest issue here, for which I have a new image…

On your right you'll see a breakdown of the three color channels that make up the original anaglyph. The red channel is from the left eye; green and blue are from the right. There are a lot of artifacts and strange rendering issues in the left eye. There are shadows on the astronaut closest to camera that appear in only one eye, and strange artifacts all over the set in that same eye. It almost looks to me like they used some sort of optical flow with a z-buffer to construct the left eye. I am not sure, but that pipe screen-left over the middle astronaut looks very mangled in the red channel. These kinds of issues can't be fixed in post; they should have been addressed at the layout level and re-rendered. This shot looked bad in the trailer, on the website, and in the finished film. Someone should have caught this.
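Pulling an anaglyph apart like this is easy to script, and it's a handy sanity check before a shot ships. A minimal sketch, assuming red = left eye as in this film:

```python
import numpy as np

def split_anaglyph(img):
    """Separate a red/cyan anaglyph into its two eye views as
    grayscale images: red channel = left eye, green+blue = right."""
    left = img[:, :, 0]
    right = img[:, :, 1:3].mean(axis=2).astype(np.uint8)
    return left, right

# Compare `left` and `right` side by side; artifacts that exist in
# only one view (like the mangled pipe here) jump out immediately.
```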

Example #4

For my last example I present a perfect illustration of why depth of field should never be used in stereo 3D. I can't even attempt to fix this, so I am just going to let you see the original image, un-retouched. What's good about it? Well, the idea is good. Bugs are floating in positive depth space, and their parallax is appropriate for this effect. But the background has two big issues. First, it's a flat 2D image with ZERO depth. This may work in 2D, but it does not in 3D. At the very least the background should have been in stereo and out of focus; then I could only complain about the depth of field.

Ok, now here's the fun part. Go ahead and click on that image, look at it full size with your glasses, and stare at it for at least a minute. It's bothersome, isn't it? I'm going to tell you why. Your brain and eyes wander through an image, focusing at different depths. This is what we do naturally every day; our focus darts around. If you present a 3D image to the brain, you're tricking us into a false reality: it looks real, you see depth, your senses are on high. If you don't allow the brain to focus on the background, as in this image, you have a big problem. In 2D you can trick the brain precisely because it is 2D; on a subconscious level, the flattening allows filmic techniques like this to be powerful tricks in the cinematic language. In 3D that is not the case. DOF (depth of field) does not work, and it should never be used in this way for a 3D film. "But," you say, "I want to direct the viewer's focus to here." Well then, you need to find another way, like having something come toward the screen, talking or demanding that you look at it. In an image such as the one above, your eyes will focus on the bugs and the background will go blurry, just like in real life when you focus on something right in front of you; when your gaze drifts to the back, the foreground will drift into blur. Your eyes and brain will create the rack focus for you as you gaze through the imagery, and it will be a joy to behold. Now, I will concede that a well-done image can use some DOF; this opinion is not absolute. But if you do use it, it must be well done and not overused, or it all falls apart.

To conclude, I have great respect for nWave and all the artists who worked on this film. I own many of their films, and I will buy this one when it comes to HQFS DVD. Getting a film done is such a huge task. I am a very critical person when it comes to 3D. I want it to grow and flourish, but it has to live by the rules of human factors and optical considerations. Digital 3D is just that: digital. It has made 3D more accessible, but it still has to be done with care and consideration for the viewer. I had great hopes for this film. I am deeply disappointed in its lackluster showcase of what great 3D can do.



  • Spamkill says:

    Great evaluation, very informative! I had a gig last year where I had to deal with stereo 3D and it’s amazing the great results you get with settings that would seem too subtle. Seems people have a tendency to try and get “extreme” with separation for maximum effect, even though it’s not necessary. Tweaking stereo 3D is a fine art and I’m sure as artists learn the nuances the quality of films will get better over time.

  • Sean Gleeson says:

    I am astounded by the errors committed by professionals with millions of dollars to spend. Maybe they’ll fix this for the DVD. (But I doubt it.)

    An artist employed by nWave posted one of the anaglyphs to Flickr and asked for a critique. I directed him to this post.

    (I have a minor disagreement with you about depth of field, but it’s not worth going into here. Some day I’ll write about it with examples to explain myself.)

  • 3Dfool says:

I'd love to hear what your thoughts are on depth of field. I have not seen a good use of it in 3D to date, and I have seen a lot of 3D films and supervised the stereo in over 8 films myself. The only way I think it can be used is in a quick shot with slight blur. The above blurred background is unforgivable as a 2D flat image.

  • Sean Gleeson says:

    No argument there. The blurry 2D background behind the flies is not a good effect.

One of these days, I'll write down all my thoughts on depth of field, with many examples. But for now, here is a brief summary, with one example. If it's done well, a shallow depth of field should be no impediment to a good stereo image. I think the notion that 3-D pictures have to be in focus all the way to infinity is a myth.

    The common objection, that “your eyes will want to see the background but will be frustrated by its being out of focus,” is of course just as applicable to 2D photography as to stereo, and just as untrue.

    I respect your opinion, I really do, but I think maybe you’ve just never seen examples of good stereo images that use depth of field properly. Look at this orchid (LINK), look at it as long as you want. I think it is a beautiful example of effective DOF in a stereo photo. If your eyes start to protest, and it hurts your brain to see this picture, I will take it all back and concede the point to you. But I don’t think that will happen.

    See also Alexander Lentjes’s essay on 3D (LINK). His paragraph about depth of field is about halfway down the page.

  • 3Dfool says:

That is a very nice photo. It works, but it doesn't prove that DOF can be used in any situation. My eyes want to resolve all parts of the image. What I'm presenting is the human factor: the way our brains and ocular impulses work. When an image is flat and 2D, you can accept this as the norm because it's not being interpreted by your brain in the same way a 3D stereo image would be. When the process of stereopsis occurs, your brain will try to focus on blurry parts. This is part of the way we are made, on a subconscious level. It may not bother you or me as much as another person, but overall, consideration must be given to it.

I've done a lot of experimenting with stereoscopic DOF professionally, and the imagery without it has made for better stereo immersion every time. According to my good friend and colleague John Merrit (an internationally recognized expert on human factors in stereoscopic displays and applications), when the brain is presented with stereo imagery it interprets it as a hyper-reality and needs to be able to change focus as it would in reality.

Now, I will concede that a well-done image like the orchid can use some DOF, but this is not a true-or-untrue kind of thing. I feel that for 90% of shots you should not use it. If you do use it, it must be well done and not overused, or it all falls apart.

  • Dear 3D-fool,

    I think you got this one all wrong!

You have evaluated the images as stills on a small computer monitor, as opposed to immersive images on a large screen, viewed as part of a continuously flowing film experience.

1) You are talking about edges breaking the frame and causing eye strain. Well, the movie is designed for immersive, virtually FRAMELESS screens. On giant screens, frame cut-off of objects happens in peripheral vision, and hence the restrictions you emphasize do not apply there.

2) You suggest correcting the parallax by large amounts, which (while suitable for a small computer monitor) would not work at all on a giant screen. You mention yourself that parallax at infinity should be no more than 65mm. How big a parallax at infinity do you think your modifications would introduce on a giant IMAX screen?

3) You don't like depth of field in 3D. Ok, that is subjective. I love that shot, but it should be seen as part of a continuous film, where it adds variation and focuses attention on the flies, not stared at as a still image "for at least a minute" on a small computer monitor!

I can add to the above that for the DVD release, the parallax was adjusted on a scene-by-scene basis, paying careful attention to objects of interest and frame cut-off, since that release is intended for viewing on a small screen (80″ or less). In the big-screen release, the film has exactly the parameters it should have to make it a masterpiece in the art of stereoscopic 3D.

    So honestly: I think the pros got the 3D all right on this one!

    Steen Iversen.
    PS: did you even see it on a big screen?

  • 3Dfool says:

Thank you for your passionate opinion. I am a professional stereographer; my opinion comes from many years creating 3D in large formats, including IMAX. No matter what your screen size, distant parallax should never, ever be more than 65mm on a projected screen. Human eyes simply do not work any other way. I did see this in IMAX, and it was very uncomfortable. Saying that the IMAX screen is so huge that designers can be lazy about frame violations is a poor excuse.


  • Dear 3Dfool,

Finally, a fellow stereographer who dares to speak out against some of today's abuses of stereoscopic cinematography. Ben Stassen of nWave is somebody who strongly believes in using converging cameras and presenting a 3-D composition as extreme as possible in every shot. Yes, that creates fatigue within 3 seconds, and it is definitely NOT good 3-D filmmaking. It's like having the volume of the music at 10 all the time, blasting away your eardrums for no particular reason other than to be LOUD. nWave's pictures do just this with their stereo photography, and the end result is, IMO, a manual of what not to do in stereoscopic cinema if viewer comfort and refined cinematography are of any importance.

I can see that the stills used for publication have been reconverged (recentrated, stereo-base shifted) to produce better results on a computer monitor, but the interaxials and convergence values in the film were definitely way, way too large for any type of cinema screen. To me it really looks like the 3-D imagery was all previewed and approved on a computer monitor rather than a big cinema screen. Honestly, frame distances of this magnitude were not even employed by the Lumière brothers when they experimented with the first 3-D film cameras. I can only conclude one thing: Ben Stassen has a very big head, with eyes set apart twice as far as the rest of us (either that, or he just sits VERY far away from his monitor)…

I can't see evidence of converging cameras in these stills, but I can see heavy geometric correction on wide-angle lenses to avoid vertical parallax. In fact, this correction may well be hiding converging cameras, so don't pin me down on this one… Convergence is a tool, albeit an advanced one, to use after employing the interaxial and then stereo-base shifting (recentration / reconvergence) to control the depth of the image. Convergence can be used after these elements to give extra volume to objects, like using a shorter lens in 2-D. But that is not how nWave has used convergence throughout its releases; they rather use it to set the point of convergence. A crude use of a refined tool, a bit like using the zoom to get closer to your subject matter rather than trucking in the camera.

Regarding the use of DOF: that particular still you use as an example is actually a lot better for it. Had the background been in focus, your eyes would have had to work extra hard to focus on it. That has a lot to do with the diopter limitation of viewing zones, and the fact that DOF can take away this hindering limitation of stereoscopic vision. I'm with nWave on this point. Remember that the artistic results are more important than the technical means by which they are achieved.

    Alexander Lentjes
    3-D Revolution Productions

  • Daniel Smith says:

I concede that you can use the DOF in this example; however, it is a 2D flat cheat. They should at the very least have used a stereo pair to defocus.

  • Daniel,

    The depth of the background plane may very well be at infinity – which, in 3-D, is pretty close by. With objects like the flies that close to the camera the background would have been far away and may thus have been beyond the plane of perceivable depth.

    Even if it isn’t actually that, for the sake of a calm, vaguely watchable 3-D image with such enormous negative parallax, a flat background plane is the right choice. Again, this is directly related to diopter-dictated limitations of viewing zones and resulting viewing comfort.

    In contrast, a movie like Spy Kids 3-D uses full depth all the time, creating a lot of viewer discomfort. Try to focus on an out-of-screen hand or stick or whatever when the background imagery is deep and rich in detail – it is far from a pleasant visual experience.


  • Allan Silliphant says:

Hi, 3DF. You are basically correct… Bravo. If you check out my website, you'll see many examples along the same lines. Our diopter-corrected glasses cure most of the diffraction seen with red "gel filters". Part of the difference between Anachrome RED/CYAN vs. the BLUE/YELLOW ColorCode alternative is that the red ghosting is more visible with Anachrome, and it doesn't display RED uniforms and RED sports cars nearly as well. Otherwise, much of the imaging is cleaner and MUCH sharper. You can read fine print in either eye! Now, if you take Anachrome to the next level, the ghosting in the background goes away. This is doable in post, or by shooting 3 different views at once: left, center, and right. Left and center are used for the background plate. Left and right make up the elements of the main subject, placed at or near the "stereo window" as a "paste-over", like in green screen. Scaling up the main subject by 2% allows the outer layer to mask the same objects on the background plate. Softening, darkening, and moving the flatter background closer behind the window objects or actors kills the ghosting, and allows the 3D to be deeper on the main subject, due to the wider stereo base on the forward plate. I hope this makes sense as written, but it actually works quite well. I'm building a wide, 3-barrel lens assembly for the 36mm sensor on the new 5D Mark II. Hi, Alexander L., if you see this, I'll try to touch base while in Europe (Brussels) later in February (this month). Allan S.

  • Daniel Smith says:

    Hi Allan. I agree completely with you. In fact I own several high quality Anachrome glasses myself, and I use them for red/cyan anaglyphs all the time.
