I appreciate your thoughts on this. I haven't had a chance to play around with the Reformat settings since I no longer have the AMA material to link to, but I have done some thinking about it. Stretch, which appears to be the default (I never set that parameter but found it set to Stretch), seems the safer choice given my concern about black lines at the top or bottom.
In theory, if the Reformat setting is Letterbox or Pillarbox, there would be wasted pixels either vertically or horizontally to fill out the project's 4096x2160 raster. Stretching gives more pixels of resolution to the shorter side that would otherwise just be filled with black. I'm not sure whether stretching to fill the short side introduces any artifacts, so maybe that is a concern.
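To put rough numbers on it, here's a quick back-of-the-envelope comparison I worked up (just my own sketch in Python, not anything Avid does internally) of a letterbox fit versus a straight stretch for a 4096x2155 source going into the 4096x2160 raster:

# Rough comparison of Letterbox vs Stretch for a 4096x2155 source
# landing in a 4096x2160 project raster. Numbers are illustrative only.
src_w, src_h = 4096, 2155
proj_w, proj_h = 4096, 2160

# Letterbox: scale uniformly to fit the width, pad the leftover height with black.
scale = proj_w / src_w                 # 1.0 here, the width already matches
fitted_h = round(src_h * scale)        # 2155
black_rows = proj_h - fitted_h         # 5 rows of black, split top/bottom

# Stretch: scale each axis independently so the raster is filled completely.
stretch_x = proj_w / src_w             # 1.0
stretch_y = proj_h / src_h             # ~1.0023, about a 0.2% vertical stretch

print(f"Letterbox leaves {black_rows} black rows in the raster")
print(f"Stretch resamples vertically by a factor of {stretch_y:.4f}")

So the trade-off on these particular clips is five black lines versus a vertical resample of roughly a quarter of a percent, which is presumably why I can't see any difference on output.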
I haven't played with Center, Keep Size yet, but I would think similar concerns apply there, or does that setting just crop the excess of the longer side? Until I play around I won't be sure.
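If I understand it right (and this is purely my assumption about what Center, Keep Size does, not something I've verified), it would place the source pixels into the raster 1:1, centered, cropping whatever hangs over and padding with black whatever falls short. The arithmetic for my two problem cases would then be something like:

# My assumption of Center, Keep Size: source pixels placed 1:1 and centered,
# cropped where the source is bigger than the raster, padded with black where
# it is smaller. Unverified - just the math of that interpretation.
def center_keep_size(src_w, src_h, proj_w=4096, proj_h=2160):
    crop_x = max(src_w - proj_w, 0)    # columns lost, split left/right
    crop_y = max(src_h - proj_h, 0)    # rows lost, split top/bottom
    pad_x = max(proj_w - src_w, 0)     # black columns added
    pad_y = max(proj_h - src_h, 0)     # black rows added
    return crop_x, crop_y, pad_x, pad_y

print(center_keep_size(4800, 2700))    # the 16:9 clips: (704, 540, 0, 0)
print(center_keep_size(4096, 2155))    # the 4096x2155 clips: (0, 0, 0, 5)

If that reading is right, the 4096x2155 clips would still pick up black lines under Center, Keep Size, so Stretch still looks like the safer default for my situation.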
To me, choosing Stretch means I've completely filled the raster, so any FrameFlex will only bite into active video, with no chance of black lines left over from a letterbox or pillarbox. I will have to play around when I can get the camera originals. One thing that pops into my head is the old 480 vs. 486 six-line stretch that tended to create odd artifacts, hence the import setting that didn't stretch DV footage (which was only 480 lines) out to fill 486. So far I haven't noticed any artifacts like that. I wonder if those old DV issues even apply now that we are dealing with square pixels instead of the anamorphic pixels of the PAL and NTSC days.
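One thing I will probably do once I have exportable media again is the check suggested in the thread quoted below: export a few frames with something bright along the edges and look at the rows directly. Here's the sort of quick test I have in mind, just a rough Python/Pillow sketch where the filename and the "near black" threshold are placeholders of mine:

# Count black rows at the top and bottom of an exported frame.
# "exported_frame.png" and the threshold of 16 are placeholders; adjust to taste.
from PIL import Image

img = Image.open("exported_frame.png").convert("RGB")
w, h = img.size
px = img.load()

def row_is_black(y, threshold=16):
    # True if every pixel in row y is near zero on all three channels.
    return all(max(px[x, y]) <= threshold for x in range(w))

top = 0
while top < h and row_is_black(top):
    top += 1

bottom = 0
while bottom < h - top and row_is_black(h - 1 - bottom):
    bottom += 1

print(f"Frame is {w}x{h}: {top} black rows at top, {bottom} at bottom")

If Stretch is really filling the raster, both counts should come back as zero.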
---In Avid-L2@yahoogroups.com, <cutandcover@...> wrote:
On Mon, Jan 30, 2017 at 8:16 PM, John Moore bigfish@... [Avid-L2] <Avid-L2@yahoogroups.com> wrote:

> I posted about my current 4K project a while back. I have two shoots that were done on Red cameras. We deliver a 4096x2160 4K master file. One shoot was done out of country and most of the footage shows up as 4K with an image size of 4096x2155. Once transcoded to DNxHR HQX the raster dimension is 4096x2160. My understanding of this is that with the AMA link the default is for the transcodes to inherit the project's raster dimensions. I later found that with 5K and 6K footage I could set the transcode to keep the source raster dimensions. Funny thing is that even when Avid transcodes keeping the source raster dimensions, if I take the resulting .mxf file into the MediaInfo program the metadata shows the dimensions as 4096x2160, so I don't really know what maintaining the source raster dimensions does. Within Avid, clips transcoded maintaining source raster dimensions look identical to clips transcoded to the project dimensions of 4096x2160, so what does the raster dimension parameter really mean and do in Avid land?

Here's what I think is happening: when you transcode and do NOT choose to bake in FrameFlex/Reformat, Avid transcodes to MXF at the project size, but the metadata of the source clips is preserved and used for future FrameFlex/Reformatting. This is a handy feature that helps if you are eventually going to relink to the camera sources. If you are not going to relink to the source, then I would set the Reformat to Stretch and bake in the FrameFlex to take advantage of all of the project pixels. Then you can FrameFlex/Reformat however you want down the line, knowing you started with a full 4096x2160 worth of pixels to use.

> Not only do I have 5K and 6K footage, they managed to shoot 16:9, not the 1.9:1 of a 4096x2160 aspect ratio. IIRC some of the footage is 4800x2700 for a 16:9 aspect ratio. With this footage it gets stretched horizontally to fill the frame. In the Source Settings FrameFlex tab it sees the correct image size and aspect ratio of 16:9. If I set the FrameFlex aspect ratio in the window below to 1.9:1, the image gets stretched vertically to crop the top and bottom enough to maintain a proper aspect ratio in the final image. There is also some 6K footage that is 1.94:1. These clips are being stretched vertically slightly. Again, in FrameFlex the image size and image aspect ratio are displayed at the top of the FrameFlex tab window. When I set the FrameFlex aspect ratio to 1.9:1 it stretches the image horizontally to crop the sides so it also displays in the correct aspect ratio. These are all pretty obvious to see, but when it comes to the clips that are 4096x2155 it's very hard to see what, if anything, is happening.

I think you might be better off starting with the Reformat control inside FrameFlex and choosing Center Keep Size. Then you can just reposition the FrameFlex box to the framing you like per shot. I may be wrong about how flexible FrameFlex can be, but I do not remember having to make those aspect adjustments unless I was intentionally changing the aspect of the source to fit the raster.

> The 4096x2155 clips display the proper info in the upper window of the FrameFlex tab. The default in the middle of the window for the FrameFlex aspect ratio says 1.9:1 custom. I don't know where the "custom" comes from, as I don't recall selecting a custom size.
> My concern is that, given the clips have an image size of 4096x2155 but a raster dimension of 4096x2160, are there black lines at the top and bottom? It's hard to tell on a monitor as they are being letterboxed to show in the proper aspect ratio and a difference of 5 lines is pretty hard to see. The problem is compounded by the fact that I don't have a 4K monitor or scope. I am running a Nitris DX and I can compare the HD downconvert output in line mode on my Tek scope, and things seem to be ending on the same lines between the 4096x2155 clips and the other 5K and 6K clips, so I think I'm okay.

You'd better export a few frames of something with an obvious color along the edge and open it up in QuickTime to check that you're filling the frame completely.

> The bottom line seems to be that with these oversized 5K and 6K clips, Avid's AMA transcode treats the media like an import that is assumed to be sized for the current project. Avid fills the raster with the clip's media whether that distorts the aspect ratio or not. Then, by setting the FrameFlex aspect ratio, it will pluck out the proper pixels to display and crop to the project raster while maintaining a proper aspect ratio. Is this what is really going on? It seems to be, but with only a 5-line difference the various parameters aren't very obvious to see on output.

This is always true if you have Reformat set to Stretch. It will always stretch the source to fit the project raster. You can change that if that's not what you want…

> On a personal note, all these extra "flexible" ways of shooting seem like way more trouble than they are worth. I don't do high-end feature work where there is time and more money to tweak dailies and sort out all these variables, but in the one-stop-shop world of documentary reality TV it sure seems like overkill. I don't want to be lazy, but I really don't see a practical way to do anything but choose camera metadata on the Red footage. In practice, do people really like going into RedCine-X and fiddling with all the controls they have? I've always worked in a world where the camera folks shot it the way they wanted it to look and in post we went from there. Other than compensating for major errors in shooting, what is the attraction of doing, in essence, a digital dailies pass in an all-Avid workflow? I understand that when shooting log etc. it's helpful to give offline a standard LUT to make the image look reasonable, but I just don't understand why this Red workflow seems so convoluted.

I would only mess with the RED metadata if you know what you're doing there, or if you think access to the RAW could be beneficial for some shots. You don't have to go to RedCine-X for it; if you have the clips linked in MC, you have access to the RED metadata controls in Source Settings. To me, it sounds like production is handing you more latitude than you need. RED is overkill for anything but fine grading - the camera RAW is in need of debayering and "developing". I would think a production like this would shoot a linear camera instead to save everyone's time. They could've shot Alexa linear straight to ProRes 4K and you'd be ready to edit once the media was copied.
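As a sanity check on the oversized clips discussed above, here's the rough arithmetic for what the FrameFlex aspect crop should be doing, assuming it simply keeps one full dimension and trims the other to hit the target aspect (again my own sketch, not anything pulled from Avid):

# Rough math for an aspect crop to a 4096x2160 (approximately 1.9:1) target,
# assuming the full width or height is kept and the other axis is trimmed.
def crop_to_aspect(src_w, src_h, target_w=4096, target_h=2160):
    target = target_w / target_h               # ~1.896:1
    if src_w / src_h > target:
        # Source is wider than the target: trim the sides.
        keep_w = round(src_h * target)
        return (src_w - keep_w, 0)
    else:
        # Source is taller, e.g. 16:9: trim the top and bottom.
        keep_h = round(src_w / target)
        return (0, src_h - keep_h)

print(crop_to_aspect(4800, 2700))   # 16:9 clips: (0, 169) - ~169 rows trimmed in total
print(crop_to_aspect(4096, 2155))   # 4096x2155 clips: (9, 0) - only ~9 columns' worth

That lines up with what I'm seeing: the 16:9 material shows an obvious crop, while on the 4096x2155 clips the difference is so small it's effectively invisible without a 4K monitor or scope.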
Posted by: bigfish@pacbell.net