What's the point of the test set?





I get the point of a validation and training set, but the importance of a test set doesn't click for me.



Let's say you train a model and you try your best to avoid overfitting by testing your model on the validation set.



After you've decided you have a model you're proud of, you do a final sanity check on the test set, and let's say the performance is trash. Are you really going to start all over? What decision-making does it inform? In my workplace, the way timelines are structured, there's no time to start over.
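For concreteness, a minimal sketch of the three-way split being described, assuming scikit-learn with a synthetic dataset standing in for real data (the estimator and split sizes are illustrative, not anything prescribed above):

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    # Synthetic data standing in for a real problem.
    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

    # Carve off the test set first and lock it away until the very end.
    X_trainval, X_test, y_trainval, y_test = train_test_split(
        X, y, test_size=0.2, random_state=0)

    # Split the remainder into training and validation sets for model development.
    X_train, X_val, y_train, y_val = train_test_split(
        X_trainval, y_trainval, test_size=0.25, random_state=0)

    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
    print("validation accuracy:", accuracy_score(y_val, model.predict(X_val)))

    # Final sanity check, done exactly once after all tuning decisions are frozen.
    print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))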










dataset






asked 3 hours ago









Nick Corona (new contributor, 61 reputation)




The test set is so that you don't cheat. – Stephen Rauch, 2 hours ago














2 Answers



















The point of a test set is to give you a final, unbiased performance measure of your entire model-building process. This includes every modelling decision in your pipeline: any preprocessing, algorithm selection, feature engineering, feature selection, hyperparameter tuning, and how you trained your model in general (5-fold? Bootstrapping? etc.). All of these decisions can lead to overfitting; for instance, selecting a set of hyperparameters that are coincidentally optimal for a particular validation set but not for the general population. Without a test set you would not be able to identify this and could end up reporting highly optimistic scores.



Also, because the above modelling pipeline can get very complex, the possibility of leaking data and overfitting becomes very high. If you tune to your validation set, how will you know whether your entire modelling process is leaking data (and therefore overfitting)?
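One common way to guard against the leakage described here is to keep all preprocessing inside the cross-validated estimator. A sketch under that assumption, using scikit-learn; the scaler, classifier and parameter grid are illustrative choices, not ones prescribed in this answer:

    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import GridSearchCV, train_test_split
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import StandardScaler

    X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
    # Hold out a test set that is never seen during tuning.
    X_dev, X_test, y_dev, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

    pipe = Pipeline([("scale", StandardScaler()),
                     ("clf", LogisticRegression(max_iter=1000))])

    # The scaler is refit inside every CV fold, so validation-fold statistics
    # never leak into preprocessing, and the test set stays untouched.
    search = GridSearchCV(pipe, {"clf__C": [0.01, 0.1, 1, 10]}, cv=5)
    search.fit(X_dev, y_dev)

    print("best cross-validated score:", search.best_score_)
    print("held-out test score:", search.score(X_test, y_test))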



You bring up a good point; of course, if we see that the test set score is poor, then we will probably go back and tweak again. If you use it too many times, this just demotes the test set into a validation set, because you now run the risk of overfitting the test set (see almost every Kaggle competition). However, through repeated test set evaluation (train the model, then test it, then repeat with a different partitioning) you will at least get a gauge on how variable your model is, which helps mitigate this problem. The number of times you repeat will depend on how much the test set scores vary and how much uncertainty you are willing to accept (also time constraints).
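A rough sketch of that repeated train/test evaluation, again assuming scikit-learn and an illustrative synthetic dataset and estimator:

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=1500, n_features=20, random_state=0)

    scores = []
    for seed in range(10):  # the number of repeats is a time/uncertainty trade-off
        # A fresh partitioning each time: train, then test, then repeat.
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=seed)
        model = GradientBoostingClassifier(random_state=seed).fit(X_tr, y_tr)
        scores.append(model.score(X_te, y_te))

    print(f"test accuracy: {np.mean(scores):.3f} +/- {np.std(scores):.3f} over {len(scores)} splits")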



In my opinion, in a business setting you should always make time to properly test your model. The dangers of overfitting are far too high, and worse, you would not even know it. If the test set scores end up being "trash", then at least you know the model is trash, so you don't use it and/or you change your approach. This is far better than thinking the model is fantastic based on non-rigorous validation and then having it fail in production. The scientific method is there for a reason, right?






answered 2 hours ago (edited 2 hours ago) by aranglol (new contributor); score: 2





















I like your question; it is somewhat philosophical in nature.



We know that a test set should not affect the model; otherwise it acts as a validation set. Therefore, even if there is enough time, if we act on a bad test result and change the model, the test set becomes a validation set, although it is not as involved as a validation set used for early stopping or parameter tuning.



In other words, a test set must be useless in just the way you have described! The moment it is useful, it becomes a validation set. Although, to be more precise, a test set is not THAT useless, because it probably lowers your (and your boss's) expectations about the later performance of the model in production, so there is a lower risk of heart failure there.



As an example, in a Kaggle competition the final set is a "test set" since it does not affect the submitted models; however, as soon as the final leaderboard is announced, that test set becomes a validation set, e.g. it affects which algorithms we later choose, i.e. those of the top competitors.



In summary, it seems that most of the time we are using less-involved validation sets to double-check more-involved validation sets.

P.S.: as of writing this answer, @aranglol came up with similar notes and examples :) (+1)






answered 1 hour ago by Esmailian; score: 0












• Do you think that repeated cross-validation would solve this issue of overfitting a particular static test set? I feel that on Kaggle no one does this because it is computationally expensive and models take a while to train. However, in practical usage, getting multiple estimates and then forming, say, a bootstrapped confidence interval seems to make a lot of intuitive sense with respect to this problem. – aranglol, 45 mins ago
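A sketch of the repeated cross-validation plus bootstrapped interval idea floated in this comment, assuming scikit-learn and NumPy with an illustrative dataset and estimator (bootstrapping the fold scores gives only a rough interval, since fold scores are correlated):

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import RepeatedKFold, cross_val_score

    X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

    # Repeated cross-validation: every repeat re-partitions the data.
    cv = RepeatedKFold(n_splits=5, n_repeats=10, random_state=0)
    scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=cv)

    # Bootstrap the mean of the fold scores for a rough 95% interval.
    rng = np.random.default_rng(0)
    boot_means = [rng.choice(scores, size=len(scores), replace=True).mean()
                  for _ in range(2000)]
    lo, hi = np.percentile(boot_means, [2.5, 97.5])
    print(f"mean accuracy {scores.mean():.3f}, bootstrap 95% CI [{lo:.3f}, {hi:.3f}]")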











