
How does predictive coding aid in lossless compression?



I'm working on a lab where we need to apply lossless predictive coding to an image before compressing it (with Huffman coding or some other lossless compression algorithm).

From the example below, it's clear that pre-processing the image with predictive coding modifies its histogram, concentrating all of the grey levels around 0. But why exactly does this aid compression?

Is there a formula for the compression rate of Huffman coding in terms of the standard deviation and entropy of the original image? Otherwise, why would the compression ratio be any different? The range of values hasn't changed between the original image and the pre-processed image.
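
For reference, here is a minimal sketch of the kind of pre-processing I mean, assuming a simple left-neighbour predictor and a synthetic gradient image in place of the real lab image:

```python
import numpy as np

# Synthetic stand-in for the lab image: a smooth horizontal gradient
# plus a little noise, grey levels in 0..255.
rng = np.random.default_rng(0)
img = (np.linspace(0, 255, 256).astype(np.int16)[None, :]
       + rng.integers(-2, 3, size=(256, 256))).clip(0, 255)

# Left-neighbour predictor: predict each pixel from the pixel to its
# left; the residual is the prediction error (first column kept as-is).
pred = np.zeros_like(img)
pred[:, 1:] = img[:, :-1]
residual = img - pred

# Same range of possible values, but the spread collapses around 0/1:
print(img.std())              # roughly 74
print(residual[:, 1:].std())  # roughly 2
```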





Thank you in advance,



Liam.










Tags: image-processing, data-compression, huffman-coding






asked 8 hours ago by Liam F-A
1 Answer



















Huffman coding, as usually applied, only considers the distribution of singletons. If $X$ is the distribution of a random singleton, then Huffman coding uses between $H(X)$ and $H(X)+1$ bits per singleton, where $H(\cdot)$ is the (base-2) entropy function.

In contrast, predictive coding can take into account correlations across data points. As a simple example, consider the following sequence:
$$
0,1,2,\ldots,255,0,1,2,\ldots,255,\ldots
$$

Huffman coding would use 8 bits per unit of data, whereas with predictive coding we could potentially get down to $O(\log n)$ bits for the entire sequence.
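
As a minimal sketch of this gap, one can compare the empirical entropy of the raw ramp sequence with that of its prediction residuals, here using a previous-value predictor (one simple choice; any predictor that captures the pattern would do):

```python
import numpy as np

def empirical_entropy(symbols):
    """Empirical base-2 entropy in bits per symbol."""
    _, counts = np.unique(symbols, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

# The repeating ramp 0,1,2,...,255,0,1,2,...,255,... (100 periods).
data = np.tile(np.arange(256, dtype=np.int16), 100)

# Previous-value predictor: residual = current - previous (mod 256),
# with the first symbol predicted from an implicit 0.
residuals = np.diff(data, prepend=0) % 256

print(empirical_entropy(data))       # 8.0 bits/symbol: per-symbol Huffman is stuck here
print(empirical_entropy(residuals))  # ~0.0006 bits/symbol: residuals are almost all 1
```

Note that a per-symbol Huffman code on the residuals still spends at least 1 bit per symbol; to approach the near-zero residual entropy (and the $O(\log n)$ figure above), one would pair the predictor with run-length or arithmetic coding instead.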






answered 8 hours ago by Yuval Filmus


























