Interesting question. The answer (IMO) is that it depends on what you are trying to accomplish.
Just so we are clear, I test Web browsers, not phone apps, but I don't think that matters for this question.
On one hand, starting from a known (clean) state for every test lets you run through a bunch of tests, over and over, each exercising specific functionality of the app under test. If you organize your runs in a smart way, you can build from one test to the next on the success of what you already know works. This allows for smaller, more specific tests, AND when something fails, it is probably easier to say what is and is not working in your app. It's also probably more stable.
On the other hand, it wastes time: you have to constantly re-run the same code to launch your app and navigate to wherever the next test needs to start. Also, who really uses your app in the real world by starting it, doing one thing, stopping, restarting, and so on?
It also depends on your testing environment setup: for example, how many computers (real or virtual, likely virtual) you have available to run tests, how many Ranorex licenses you have, etc. If you can only run one test on one config at a time, then time might be something you don't want to waste. But if you have the resources to do many things in parallel, and to kick everything off overnight, then perhaps time is not an issue.
For me and what I test, I do clean starts and mostly individual tests that each check specific functionality. It definitely costs about a minute between tests to shut down the current test and start the next, plus maybe a few more seconds or minutes to navigate back to the state I need to be in. But I'd rather have 10 tests that test 10 specific things, look at the results for those 10 tests, and know that 10 things work, than have 1 test say the same. This matters most when things fail. If tests 1-6 pass and then test 7 fails (and depending on whether tests 8-10 depend on 7), I can quickly look at the results and know what is not working. If it's all one test, the result tells me nothing, and I have to dig through report/log files to figure out exactly what broke. Then I have to re-run the entire thing, which wastes time, and maybe tests 8-10 need re-running too. I also have a whole bunch of virtual machines and servers that I manage for all this, enough licenses to run many things at once, and a process to kick it all off with a few clicks, overnight.
But I also often throw in a super test, to mimic more of a real-world scenario. This test strings together elements of previously tested functionality into one big flow. Most of it repeats what has already been tested individually, but it runs much longer and does much more in a single pass.
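Continuing the hypothetical sketch from before (again, `FakeApp` and the step names are assumptions, not a real API), a super test like this reuses the same steps the small tests covered, but chains them in one continuous session with no restarts in between:

```python
class FakeApp:
    """Hypothetical stand-in for the application under test."""
    def __init__(self):
        self.visited = []
    def navigate(self, page):
        self.visited.append(page)

# Each step mirrors functionality already covered by an individual test.
def do_login(app):
    app.navigate("login")

def do_search(app):
    app.navigate("search")

def do_checkout(app):
    app.navigate("checkout")

def super_test():
    """One app launch, one continuous flow -- closer to real-world usage."""
    app = FakeApp()
    do_login(app)
    do_search(app)
    do_checkout(app)
    # The whole journey ran in a single session; a failure here signals a
    # problem with the flow as a whole, not a specific feature.
    return app.visited == ["login", "search", "checkout"]
```

Because the steps are shared helpers, the super test costs little extra to maintain on top of the individual tests.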
Anyway, those are my thoughts.