I am working on a custom board that uses an ADV7282-M chip to capture video on an AGX Orin.
I had an issue with video alignment: the resolution is PAL 720x576 and I set the preferred stride to 64, so the actual size is now 736x576 and each line is aligned to 64 bytes.
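For reference, the padded width follows from rounding the UYVY line size up to the stride alignment. A quick sketch of that arithmetic (the values are from this post; the rounding is the standard round-up-to-multiple formula):

```shell
#!/bin/sh
# UYVY is 2 bytes per pixel, so a 720-pixel PAL line is 1440 bytes.
# Rounding up to a 64-byte stride gives 1472 bytes, i.e. 736 pixels.
WIDTH=720; BPP=2; ALIGN=64; HEIGHT=576
LINE=$((WIDTH * BPP))                              # 1440 bytes, unpadded
STRIDE=$(( (LINE + ALIGN - 1) / ALIGN * ALIGN ))   # 1472 bytes, padded
echo "stride bytes:  $STRIDE"                      # 1472
echo "stride pixels: $((STRIDE / BPP))"            # 736
echo "image size:    $((STRIDE * HEIGHT))"         # 847872
```

These numbers match the "Bytes per Line: 1472" and "Size Image: 847872" reported by v4l2-ctl below.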
I configured the ADV7282-M to send a test pattern.
When I try to capture the video, I get the first frame as expected, but after that the frames are not aligned and I see a blank line that moves across the frames.
The GStreamer pipeline I run is: gst-launch-1.0 v4l2src device=/dev/video2 ! filesink location=test_pattern.raw
If I try the same capture with v4l2-ctl, at first it does not work, but after I run the GStreamer pipeline it does, and I get a clean video showing the pattern, with the stride-alignment padding lined up on the right.
I run: v4l2-ctl --device /dev/video2 --stream-mmap --stream-to=frame.raw --stream-count=100
When I try the same with live video instead of the test pattern, the behaviour is much the same, but I cannot get more than a few frames and no continuous video.
I will attach pictures of frames in both cases and the low-level MIPI trace.
Why can't I capture with v4l2-ctl on the first try?
Why can't I get the same results with GStreamer as with v4l2-ctl?
Why is the live-video behaviour different from the test-pattern behaviour?
It looks like the surface configuration is still mismatched with the real output.
How do you configure the alignment? There are CID controls, e.g. --set-ctrl preferred_stride=X.
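For example (an untested sketch; the device node is taken from this thread, and preferred_stride is the control name suggested above, so confirm it exists on your build):

```shell
# Set the preferred stride control on the capture node, then read it back.
v4l2-ctl -d /dev/video2 --set-ctrl preferred_stride=64
v4l2-ctl -d /dev/video2 --get-ctrl preferred_stride
# List all controls to confirm the exact control name on your build:
v4l2-ctl -d /dev/video2 --list-ctrls
```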
Currently I configure it in vi4_registers.h in the kernel (#define TEGRA_STRIDE_ALIGNMENT 64).
Do I need to configure anything else?
I tried the V4L preferred_stride control as well.
As far as I understand, this device supports only PAL or NTSC resolutions, and they are hard-coded in the driver code.
It can work at 720x576 or 720x480, or, if we don't want to use the deinterlacer, we can get half the height (i.e. 288 or 240 lines).
With live video it should detect the incoming standard and configure the output accordingly; since we intend to use only PAL, that is what I configured as the default, and I am testing it with the test pattern.
If you think I should add something in the driver or device tree, please let me know.
This is the output when I ask for all V4L parameters:
v4l2-ctl -d /dev/video3 --all
Driver Info:
Driver name : tegra-video
Card type : vi-output, adv7180 2-0020
Bus info : platform:tegra-capture-vi:0
Driver version : 5.10.104
Capabilities : 0x84200001
Video Capture
Streaming
Extended Pix Format
Device Capabilities
Device Caps : 0x04200001
Video Capture
Streaming
Extended Pix Format
Media Driver Info:
Driver name : tegra-camrtc-ca
Model : NVIDIA Tegra Video Input Device
Serial :
Bus info :
Media version : 5.10.104
Hardware revision: 0x00000003 (3)
Driver version : 5.10.104
Interface Info:
ID : 0x03000047
Type : V4L Video
Entity Info:
ID : 0x00000045 (69)
Name : vi-output, adv7180 2-0020
Function : V4L2 I/O
Pad 0x01000046 : 0: Sink
Link 0x0200004b: from remote pad 0x100000c of entity '13e40000.host1x:nvcsi@15a00000-': Data, Enabled
Priority: 2
Video input : 0 (Camera 0: ok)
Format Video Capture:
Width/Height : 720/576
Pixel Format : 'UYVY' (UYVY 4:2:2)
Field : None
Bytes per Line : 1472
Size Image : 847872
Colorspace : sRGB
Transfer Function : Rec. 709
YCbCr/HSV Encoding: ITU-R 601
Quantization : Limited Range
Flags :
Just to make sure I understand.
This is not a resolution supported by the ADV7282.
Should I change something in the stride settings, or leave it at 64 and just modify the driver to hard-code the width at 768?
Can you please explain why this specific width?
Hi Jerry,
I was able to capture the test pattern with v4l2-ctl after modifying some configuration in the adv7282 driver. GStreamer is still not working properly: it expects 720x576 even though it is aware of the stride, that the line is 768, and that the image size is 884736 (I can see this in GST_DEBUG). However, if I ask it to capture 768x576, it reports an internal stream error. If you have an idea how to fix this, or how I can work around it with the v4l2-ctl capture, please share. My main problem now is that when trying to capture real live video I get only a few frames. Can you help with this? If so, tell me what I need to provide.
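The numbers from GST_DEBUG are at least self-consistent. One plausible reading (an assumption on my part, not confirmed in this thread) is that the capture surface is being padded to a 256-byte boundary, which gives exactly a 768-pixel line:

```shell
#!/bin/sh
# 768 pixels of UYVY is 1536 bytes per line; 1440 bytes (720 px * 2 B)
# rounded up to a 256-byte boundary is also 1536; and 1536 * 576 lines
# matches the image size reported in GST_DEBUG.
echo $((768 * 2))                        # 1536 bytes per line
echo $(( (720 * 2 + 255) / 256 * 256 ))  # 1440 rounded up to 256 -> 1536
echo $((1536 * 576))                     # 884736 image size
```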
For now I am just streaming from v4l2-ctl into a FIFO and capturing the stream from that FIFO, which works fine, but I would prefer to handle it directly in GStreamer.
Could you please try the pipeline below to show the frame rate only, and share the complete messages for checking.
It should stream 150 frames, which is around 5 seconds for a sensor running at 30 fps.
for instance, $ gst-launch-1.0 v4l2src device=/dev/video2 num-buffers=150 ! 'video/x-raw, width=1920, height=1080, framerate=30/1, format=UYVY' ! nvvidconv ! fpsdisplaysink text-overlay=0 name=sink_0 video-sink=fakesink sync=0 -v
This is just a demonstration of the gst pipeline; please revise the resolution to one supported by your use case.
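Adapted to the PAL format discussed in this thread (an untested sketch; PAL is 25 fps, and the device node and UYVY caps are taken from the earlier posts):

```shell
gst-launch-1.0 v4l2src device=/dev/video2 num-buffers=150 ! \
  'video/x-raw, width=720, height=576, framerate=25/1, format=UYVY' ! \
  nvvidconv ! fpsdisplaysink text-overlay=0 name=sink_0 video-sink=fakesink sync=0 -v
```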
Since you're unable to access the stream via v4l2src, may I know what error is reported?
Please also gather the kernel-layer failure logs for reference: $ dmesg > klogs.txt