From Falk.Schiffner at dlr.de Thu Oct 2 07:14:45 2025
From: Falk.Schiffner at dlr.de (Falk.Schiffner at dlr.de)
Date: Thu, 2 Oct 2025 11:14:45 +0000
Subject: [SIS-MIA] Subjective Video Quality Assessment Brief Guideline
Message-ID: <30cd25978fd74b0ca1e3a6eb1311820c@dlr.de>

Hello everybody,

as mentioned at the last meetings, I am providing a brief introduction to subjective video quality assessment. I hope the instructions are clear (I did not do a spelling check, sorry!). If there are any questions on the topic, please contact me for clarification!

See you soon, and I hope everybody is well!

Best
Falk

---------------------------------------
German Aerospace Center (DLR e.V.)
Space Operations & Astronaut Training | Communication & Ground Stations
Rutherfordstrasse 2 | 12489 Berlin

Dr.-Ing. F a l k  R. S c h i f f n e r
Ground Operations Group
System Engineer
DTN / CCSDS Delegate (DTN, Audio-Video Assessment)
Phone: +49 (0)30 670558177 | Email: falk.schiffner at dlr.de
www.DLR.de

-------------- next part --------------
A non-text attachment was scrubbed...
Name: SubjectiveVideoQualityTesting.pdf
Type: application/pdf
Size: 3005634 bytes
Desc: SubjectiveVideoQualityTesting.pdf
URL:

From walt.lindblom at nasa.gov Thu Oct 2 10:30:20 2025
From: walt.lindblom at nasa.gov (Lindblom, Walter E. (MSFC-IS64)[AEGIS])
Date: Thu, 2 Oct 2025 14:30:20 +0000
Subject: [SIS-MIA] [EXTERNAL] [BULK] Subjective Video Quality Assessment Brief Guideline
In-Reply-To: <30cd25978fd74b0ca1e3a6eb1311820c@dlr.de>
References: <30cd25978fd74b0ca1e3a6eb1311820c@dlr.de>
Message-ID:

Falk,

This is excellent! In our testing, will we use our reference clips, both ideal and degraded, as the clips to be evaluated? That would give a direct correlation between this test and the VMAF test scores.

Also, you mentioned in the Influencing Factors section that test participants tend not to use the ends of the rating scale because they expect that better or worse samples may still come. Could this be overcome by showing, at the beginning of the test, a reference clip with no issues followed by the same clip with the worst degradation we plan to show?

I believe what you describe has a high probability of happening. One of my daughters was a gymnast and always hated being the first one to do any event. Whoever went first would get a median score, no matter how well they performed, which led to meet scores being artificially low in many cases. This was the norm, not the exception. If we can mitigate that, I think it will help and give us more uniform scoring.

My thought is to summarize the testing method for the Yellow Book. Instead of the influencing factors you listed, put in that our testing, conducted both in Germany and the US, follows ITU guidelines for subjective video ratings.

Walt Lindblom
Video Engineer, NASA Imagery Experts Group
AEGIS Leidos Inc.
MSFC Building 4485
256-684-0580
walt.lindblom at nasa.gov
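
To illustrate the correlation Walt refers to between the subjective test and the VMAF scores, a minimal sketch could look like the following. All clip names and numbers are invented placeholders; it assumes per-clip mean opinion scores (MOS) and VMAF scores are already available, and it uses scipy only for the correlation statistics. It is not part of the guideline itself.

    from scipy.stats import pearsonr, spearmanr

    # Per-clip results: (mean opinion score on a 1-5 scale, VMAF score 0-100).
    # All names and values are invented placeholders for illustration only.
    clips = {
        "reference_ideal": (4.8, 97.2),
        "degraded_light":  (3.9, 82.5),
        "degraded_medium": (3.1, 68.0),
        "degraded_heavy":  (2.2, 51.4),
        "degraded_worst":  (1.4, 33.7),
    }

    mos  = [m for m, _ in clips.values()]
    vmaf = [v for _, v in clips.values()]

    # Pearson measures linear agreement, Spearman measures rank-order agreement;
    # both are commonly reported when comparing subjective ratings with objective metrics.
    r_p, p_p = pearsonr(mos, vmaf)
    rho, p_s = spearmanr(mos, vmaf)
    print("Pearson r = %.3f (p=%.3f), Spearman rho = %.3f (p=%.3f)" % (r_p, p_p, rho, p_s))

Whether Pearson or Spearman is the more meaningful figure depends on whether a linear MOS-VMAF relationship is expected over the tested quality range.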
From Falk.Schiffner at dlr.de Thu Oct 2 14:59:03 2025
From: Falk.Schiffner at dlr.de (Falk.Schiffner at dlr.de)
Date: Thu, 2 Oct 2025 18:59:03 +0000
Subject: [SIS-MIA] [EXTERNAL] [BULK] Subjective Video Quality Assessment Brief Guideline
In-Reply-To:
References: <30cd25978fd74b0ca1e3a6eb1311820c@dlr.de>
Message-ID:

Hi Walt,

Yes, that is why you put a short training phase at the beginning showing the range of degradations.

Further, what you describe is an order effect. Therefore, one includes each degradation/file at least twice in the test set and randomizes the order of all test clips for each test participant. In this way, you avoid the samples always being rated in the same sequence.

Attached are two test playlists from an old speech quality experiment that I did (:TB = training begins / :TE = training ends). When you compare the two lists, you can see that the training phase is identical for both test participants; only the test set is randomized. Nevertheless, both have to rate the same files. You can also see that, after the training, a test condition such as NB_xxxx.wav (narrowband) appears several times in the test running order. These lists were created specifically for test participants 01 (VP01) and 02 (VP02).

So, for example, if each test condition/file is in the set 4x and there are 20 test participants, this gathers 80 quality ratings per test condition. That is enough data points to average out order effects.

For the Yellow Book: yes, we do not need to describe all the influencing factors. They are for us, to set up a test that limits "bad" influences and gives us meaningful results. We should only describe and document what we did in the experiment/study.

Best
Falk
-------------- next part --------------
A non-text attachment was scrubbed...
Name: VP01_PL_PL.lst
Type: application/octet-stream
Size: 2241 bytes
Desc: VP01_PL_PL.lst
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: VP02_PL_PL.lst
Type: application/octet-stream
Size: 2241 bytes
Desc: VP02_PL_PL.lst
URL:
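
A minimal sketch of the playlist generation Falk describes above: a fixed training phase spanning the quality range, followed by a per-participant randomized test set in which every condition appears several times. The :TB/:TE markers, the NB_xxxx.wav condition, the VPxx_PL_PL.lst file names, and the 4-repetitions/20-participants figures come from the mail; all other file names, and the code itself, are illustrative assumptions rather than the tooling actually used.

    import random

    # Training phase: identical for every participant, spans the full quality range.
    # ":TB"/":TE" markers and the NB_xxxx.wav condition follow the mail; other names are placeholders.
    TRAINING = [":TB", "TRAIN_best.wav", "TRAIN_mid.wav", "TRAIN_worst.wav", ":TE"]
    CONDITIONS = ["REF_0001.wav", "NB_0001.wav", "WB_0001.wav", "PL_0001.wav"]
    REPEATS = 4          # each condition appears 4x per participant (example figure from the mail)
    PARTICIPANTS = 20    # 4 x 20 = 80 ratings per condition

    def make_playlist(participant_id):
        """Fixed training followed by a per-participant randomized test set."""
        test_set = CONDITIONS * REPEATS
        rng = random.Random(participant_id)   # reproducible, but a different order per participant
        rng.shuffle(test_set)                 # same files for everyone, different running order
        return TRAINING + test_set

    for vp in range(1, PARTICIPANTS + 1):
        with open("VP%02d_PL_PL.lst" % vp, "w") as fh:
            fh.write("\n".join(make_playlist(vp)) + "\n")

With each condition presented 4 times to each of 20 participants, this yields 4 x 20 = 80 ratings per condition, the figure quoted in the reply above, which is what allows residual order effects to be averaged out.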