I know this may sound a bit unusual, but I have seen it used with some success.
I don't think it is realistic to expect to take advantage of all 16 bits, but you may be able to take advantage of, say, 10 bits.
Take the signal you are measuring and feed it into one of the ADC inputs. Take the same signal, pass it through an op-amp with a precision gain of 4, and feed that output to the second ADC. When you do simultaneous conversions from the two ADCs, concatenate the 8 bits from the ADC converting the unamplified signal with the 2 least significant bits from the second ADC. You will need to arrange some sort of input clamp on the second ADC's input so that the amplified signal does not exceed the ADC's input range. The 8 bits form the high-order portion of the 10-bit result and the 2 bits form the low-order portion.
This approach will adversely affect the signal-to-noise ratio, but it is a way to squeeze out some added resolution.
How does that help?
BTW, how would you arrange simultaneous conversions?
It doesn't seem reasonable to measure a signal at time zero and then measure the same signal some time later. What you have is two conversions at different resolutions. If you worked for me and made this proposal, I think I'd put you on probation, and I'd promote the guy who came up with a one-chip solution. Why screw around with crap like this?
The 8-bit ADCs would have to be scaled along with the voltage out of the divider. The conversions would be identical. It's not possible to chain two 8-bit ADCs to make 9-, 10-, 12-, or 16-bit conversions.