"Here is a suggestion for you, @WBahn. On your 14.04 box, install VirtualBox (sudo apt-get install virtualbox). Then create a virtual machine running 16.04.3. That way you can test a Kinetic install and make sure it works properly before upgrading your system."

While I see the value in that, and it's a useful and appreciated suggestion, I don't know that it helps me a whole lot.
It seems to be telling me that before I upgrade a Linux box to the next OS release, I really need to test every piece of software I'm running on my present version in a virtual machine with the new version to see if it still works. Is that even practical? Am I likely to even remember every program I've installed on a given machine? How much time would all of that installation and testing take? And then I have to do all of the installation over again after the actual upgrade.
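(For what it's worth, on the "remembering every program" front, apt can at least list the packages that were installed by hand rather than pulled in automatically as dependencies. This assumes a Debian/Ubuntu box where apt-mark is available; the output file name is just my own choice:)

```shell
# List the packages explicitly installed by the user (not auto-installed
# dependencies) and save the list so it survives a reinstall or upgrade.
apt-mark showmanual | sort > manually-installed-packages.txt

# Quick sanity check: how many packages am I actually responsible for?
wc -l manually-installed-packages.txt
```

It doesn't answer the "is this practical?" question, but at least the list itself doesn't have to live in my memory.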
Does the Linux community have absolutely no sense of backwards compatibility?
There's definitely something that I'm missing from my understanding of the model here.
For instance, using the software we've mentioned here purely as generic placeholders, let's say I build an application that uses ROS Indigo and runs on 14.04. Everything is wonderful. Now 16.04 comes out, and let's say the person or team that wrote ROS Indigo has not updated it to run on 16.04 yet. Maybe they will update it next week; maybe they never will, because they've moved on to something else, or gone out of business, or were all killed in an airplane crash. That means I can't upgrade my system to 16.04, because my application isn't going to work when ROS Indigo won't run on 16.04. But it also means that no one who uses MY application can upgrade to 16.04 either. Yet many of them probably want or need to use new applications that won't run on 14.04 because they were developed for 16.04 from the start.
While I won't argue that Linux is far superior to Windows in many, many ways, I have to admit that I just don't run into this problem on Windows machines. I am running programs on my Win7 box that are 25-year-old Win3.1 programs, and even a couple of programs from the DOS days. In general, I expect a program written for one version of Windows to run fine on any later version, and I am almost never disappointed. I have some friends who are up in arms because they like to write assembly language programs for DOS, and the latest versions of Windows finally stopped supporting them natively, so they've had to find tools to run them virtually. I tell them it's pretty unreasonable to expect an OS to be backwards compatible for three decades, but I certainly don't think it's unreasonable to expect it to be backwards compatible from one version to the next.
It's the same with Python. You can't just mix code written for Python 2 with code written for Python 3: a lot of code written for Python 3 won't run under Python 2 (which is expected), but a lot of code written for Python 2 won't run under Python 3 either! Yet the Python folks always go on and on about how Python is all about code reuse. It seems more like it's all about continual code rewriting, as each new version breaks a big chunk of everything you've already written.
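To make that concrete, here is a minimal sketch of two of the Python 2 idioms I mean, and what Python 3 did to them (the print statement and integer division; the values are just made-up examples):

```python
# Python 2:  print "hello"   -- print was a statement.
# Python 3:  that line is a SyntaxError; print is an ordinary function
#            and needs parentheses.
print("hello")

# Python 2:  7 / 2 == 3      -- / floored when both operands were integers.
# Python 3:  / is true division, so old code that relied on flooring
#            silently computes something different.
print(7 / 2)    # 3.5 under Python 3
print(7 // 2)   # 3: // is the floor division that Python 2's / performed
```

The division change is the nastier of the two, because it isn't a syntax error: the old program still runs, it just quietly produces different numbers.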
So, seriously, what am I missing here? Surely this isn't the way things in the Linux world are supposed to work.