Academic Research Study for Currently Employed, U.S. Engineers and Scientists

Thread Starter

Chad M

Joined Apr 16, 2017
0
All,

I am conducting an academic research study of engineers and scientists currently employed in the U.S. on workplace perceptions and behaviors. Participation consists of completing a single, anonymous survey, which at a moderate pace should take approximately 15 minutes. The survey can be accessed at the following link:

https://mcob.az1.qualtrics.com/jfe/form/SV_39up1gTWMMZgRGR

Thank you for your consideration. Any assistance is greatly appreciated.

Note: This attempt to collect responses is part of a larger effort that includes two plants in the Southeastern and Southwestern U.S., a technical center in the Southeastern U.S., two research and development centers in the Northeastern U.S., and notices posted on various professional engineering boards (AAC, LinkedIn, etc.).

V/r
Chad
 

tcmtech

Joined Nov 4, 2013
2,867
80 questions. Several questions repeat, and none of it amounts to much more than bureaucratic paperwork shuffling that will never have any impact on how anyone is treated or acts in their jobs. :(
 

Thread Starter

Chad M

Joined Apr 16, 2017
0
Thank you for your feedback.

Again, this is research on workplace behaviors and perceptions, which are related yet distinct constructs. It includes only established scales validated through previous peer-reviewed empirical research. Questions may appear similar, but none of them are repeated.

V/r
Chad
 

WBahn

Joined Mar 31, 2012
26,272
Your survey results are guaranteed to be absolutely meaningless and you cannot draw any meaningful and/or defensible conclusion from them. If this is truly any kind of "academic" undertaking, then you should properly receive zero credit for any part of your project that involves this survey.

What you are doing is called an uncontrolled, self-selected survey. You have nothing beyond a vague notion of who is even seeing your survey, and thus no basis for asserting that the respondents are representative of the population you are trying to learn about. Far worse, you have absolutely no idea what distinguishes the people who choose to participate from those who don't. The one thing you can be very confident about is that the people who choose to respond are a tiny fraction of the people who could have responded, and thus highly non-representative -- the very fact that they chose to respond sets them apart from the people who hold the information you are seeking, namely the people who chose NOT to respond!

Surveys like this represent negative knowledge. Before you do the survey you know that you don't know any meaningful answer to the question you are trying to address. After the survey you delude yourself into believing that you have learned something when, in fact, you haven't. Believing that you know something that is bogus is worse than knowing that you don't know something at all -- hence your true level of knowledge has been reduced.
 

Thread Starter

Chad M

Joined Apr 16, 2017
0
Any type of social science research has its limitations and biases, whether it is conducted via non-experimental means (e.g., a survey using random participation or panels), where individuals self-select, or via experimental means (e.g., laboratory experiments), which also suffer from self-selection. Some individuals will never participate in any research; every researcher understands this, and no respected IRB would allow a researcher to make participation mandatory. Now, I could pay for participation or use a sample of college sophomores with a promise of extra credit, and publish in an A-star journal, but that would lead to a high occurrence of self-interested payoff-maximizers in the participant pool. Instead, I attempt to maximize participation through as many means as possible (as discussed in the initial post) in order to achieve a sample size big enough to perhaps reach reasonable conclusions based on a priori knowledge and previous research.

I understand that social science research does not meet the rigor of the hard sciences, but certain phenomena cannot be measured directly, such as those prevalent in a small percentage of forum posters (e.g., cynicism, narcissism, Machiavellianism). In the end, I may know nothing more than the little information I receive from those few who are willing to help me or foster the advancement of social science, but that is more information than I had when I began. So even if my research does not result in a theoretical breakthrough, which was never the aim, I would much rather conduct it than spend my time sniping on the Internet.

V/r
Chad
 

cmartinez

Joined Jan 17, 2007
7,172
I attempt to maximize participation through as many means as possible (as discussed in the initial post) in order to achieve a sample size big enough to perhaps reach reasonable conclusions
No offense, but I think it's naïve to expect even remotely accurate data from an anonymous survey of untraceable people. You won't even know if the people answering your questions really are engineers or scientists... or even whether they reside in the U.S.
 

Thread Starter

Chad M

Joined Apr 16, 2017
0
No offense taken. My university requires that surveys be conducted anonymously, which is the norm for most academic research. True, I will not know whether the individuals taking the survey are truly engineers or scientists, or truly employed, or even human, but that is a limitation of self-report data and would be noted in any write-up. As for whether they are in the U.S., IP geolocation is built into the survey system.

It is hard enough to get individuals to volunteer to take the survey, so I would think the chances that people are roaming the internet in search of surveys to take for the pure fun of it, and that enough of these global survey-takers complete the same survey, are probably not very high.
 

WBahn

Joined Mar 31, 2012
26,272
Any type of social science research has its limitations and biases, whether it is conducted via non-experimental means (e.g., a survey using random participation or panels), where individuals self-select, or via experimental means (e.g., laboratory experiments), which also suffer from self-selection. Some individuals will never participate in any research; every researcher understands this, and no respected IRB would allow a researcher to make participation mandatory. Now, I could pay for participation or use a sample of college sophomores with a promise of extra credit, and publish in an A-star journal, but that would lead to a high occurrence of self-interested payoff-maximizers in the participant pool.
You are setting up a really poor strawman argument -- you are claiming that since any type of social science research has limitations, it is therefore acceptable to use any research method, regardless of how fundamentally meaningless it happens to be.

Instead, I attempt to maximize participation through as many means as possible (as discussed in the initial post) in order to achieve a sample size big enough to perhaps reach reasonable conclusions based on a priori knowledge and previous research.
Maximum participation is meaningless unless that participation is demonstrably relevant. Imagine that two people are trying to find out whether teenage girls think it is a good idea for teenage girls to participate in sexting. One person goes to a local high school and asks fifty teenage girls this question; twenty of them respond, with fifteen saying that it is not good for a teenage girl to participate in such conduct. The other person posts an online survey that is seen by one million people; one thousand respond, with seven hundred saying that it is perfectly okay for teenage girls to participate. Both approaches have serious limitations, but does the fact that the online survey had fifty times the participation make it the more credible of the two?

What if only one hundred of the online respondents were teenage girls and the rest were middle-aged males? In that case the second researcher would have collected data from five times as many teenage girls as the first researcher, so does that make the results of the online survey more reliable? Or does the fact that ninety percent of the data came from a population the study was not interested in make the data meaningless for that purpose?
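The arithmetic behind this hypothetical can be sketched in a few lines of Python. All figures are the made-up numbers from the example above, not real data:

```python
# Hypothetical sexting-survey example (made-up numbers, not real data).
# Local survey: 20 teenage girls respond; 15 say "not okay", 5 say "okay".
local_n = 20
local_okay = 5

# Online survey: 1,000 respond and 700 say "okay" -- but only 100 of the
# 1,000 respondents are actually teenage girls.
online_n = 1000
online_okay = 700
online_girls = 100

# Naive estimate from the online survey, treating every respondent as
# relevant to the question:
naive_rate = online_okay / online_n   # 0.70

# Estimate from the local survey, where every respondent is in the
# target population:
local_rate = local_okay / local_n     # 0.25

# 90% of the online sample lies outside the target population, so the
# naive 70% mostly measures middle-aged males, not teenage girls. Without
# knowing how the 700 "okay" answers split between the two groups, the
# online data says nothing about the group the study cares about.
print(f"online naive estimate: {naive_rate:.0%}")
print(f"local estimate:        {local_rate:.0%}")
```

The point of the sketch: a bigger denominator only helps if the numerator is drawn from the population you actually care about.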

I understand that social science research does not meet the rigor of the hard sciences, but certain phenomena cannot be measured directly, such as those prevalent in a small percentage of forum posters (e.g., cynicism, narcissism, Machiavellianism). In the end, I may know nothing more than the little information I receive from those few who are willing to help me or foster the advancement of social science, but that is more information than I had when I began.
And that is precisely why it will yield negative knowledge -- you are invested in the delusion that "the few that are willing to help" will let you know more about the issue you are studying when, in fact, you have zero basis for concluding that a single data point was even relevant and have every reason to suspect that the overwhelming majority of your data points are not at all relevant.

Going back to the example: does knowing that well over half of middle-aged males think teenage girls should sext tell you anything about how teenage girls feel about it? Of course not! But if you did that second study, you would believe something that is almost certainly completely bogus -- and that's LESS knowledge than you had before, because before you at least accurately knew that you had no idea.

So if my research does not result in a theoretical breakthrough, which was never the aim, I would much rather conduct it than spend my time sniping on the Internet.

V/r
Chad
By all means, conduct your garbage research -- the mere fact that you are unwilling to reevaluate your methods when serious flaws are brought to your attention indicates that you have no interest in even attempting to perform defensible research. You want to take the easy way out and post an online survey, and you don't care whether the results are valid in any way, shape, or form -- it's an approach that lets you be lazy, and that's all you care about. I can only hope that the people evaluating your work are not taken in by your lack of academic integrity.
 

WBahn

Joined Mar 31, 2012
26,272
No offense taken. My university requires that surveys be conducted anonymously, which is the norm for most academic research. True, I will not know whether the individuals taking the survey are truly engineers or scientists, or truly employed, or even human, but that is a limitation of self-report data and would be noted in any write-up. As for whether they are in the U.S., IP geolocation is built into the survey system.

It is hard enough to get individuals to volunteer to take the survey, so I would think the chances that people are roaming the internet in search of surveys to take for the pure fun of it, and that enough of these global survey-takers complete the same survey, are probably not very high.
It's not a question of people going out of their way to seek out surveys. It's a question of whether the bias that motivates someone to participate in a particular survey or not skews the data too much. Just ask yourself who is more likely to respond to your survey: someone who is satisfied with their workplace environment, or someone who is not? Who is more likely to respond: someone who thinks their workplace is fine, or someone who has filed ten grievances in the past six months?
 

Thread Starter

Chad M

Joined Apr 16, 2017
0
I understand that certain demographics are more likely to take a survey than others. This survey is not specific to anyone's particular organization, so whether or not one is satisfied with their current organization should not overly skew the data; respondents are not punishing their own organization or getting back at their bosses by answering this survey. Further, forcing someone to participate in a study when they otherwise would not will also result in unreliable data. I am open to other approaches, and if you have suggestions for methods other than self-reports to capture an individual's own perceptions, please share.
 

WBahn

Joined Mar 31, 2012
26,272
It has nothing to do with whether someone is trying to punish their organization. It has simply to do with people being far, far more likely to participate in a survey if its topic is of particular relevance to them, coupled with the fact that you have no way to estimate what fraction of the potential respondents view your survey as particularly relevant to them, let alone why they view it that way.

Online surveys are junk. Period. They should never be used for anything other than entertainment (and, personally, the only entertaining aspect of them is how they are almost always vastly at odds with decent surveys on the same subject).

You need to devise a means of identifying members of the population you are trying to sample, and then a means of sampling that population in an acceptably random way. There are many respectable ways to do this, depending on your budget and timescale. One of the most common is to dial random phone numbers in the geographic regions of interest. Provided that your study is such that it is defensible to assume that the group of people who will not participate in any telephone survey is independent of what their answers would have been (and this is usually, but not always, a defensible assumption), you then ask qualifying questions to determine whether each person is likely to fall into a group that should be excluded or weighted differently -- these are not easy tasks and are far from perfect, but they are usually done.

It has been shown repeatedly that people who agree to participate in a telephone survey overwhelmingly give honest answers to a live surveyor; it has also been shown that people are far more likely to enter false information on an online survey. It has been suggested that several factors are at play. The two strongest: people are naturally less inclined to lie to a stranger about things for which lying provides no benefit, and the dynamics of a live conversation encourage honesty, simply because you are on the spot to give an answer immediately and telling the truth is much, much easier than making something up. In a self-paced online survey, by contrast, people are much more likely to take it with the intention of providing misleading information -- when someone calls you, you either take the survey right then or not at all, whereas an online survey usually allows people to take it at their convenience, which often translates into taking it when they are in a mood to mess with the results -- and it is much easier to decide what lies to tell when you are not under any time pressure to respond.

Then you need to track your participation rates. Below some threshold, the self-selection bias will simply be too dominant to yield any defensible conclusions, but if the sampling method is strong enough, quality results can be teased out even when the response rate is otherwise low -- if nothing else, confidence bounds can be estimated.
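The confidence-bound idea mentioned above can be illustrated with the standard normal-approximation margin of error for a proportion. This is a generic textbook sketch, not something from the thread, and the figures in the example are illustrative:

```python
import math

def margin_of_error(p_hat: float, n: int, z: float = 1.96) -> float:
    """Half-width of a normal-approximation confidence interval for a
    proportion p_hat estimated from a simple random sample of size n.
    z = 1.96 corresponds to a 95% confidence level."""
    return z * math.sqrt(p_hat * (1.0 - p_hat) / n)

# Illustrative example: 55% of 400 randomly sampled respondents say "yes".
moe = margin_of_error(0.55, 400)
print(f"95% CI: 55% +/- {moe:.1%}")   # roughly +/- 4.9 percentage points
```

Note that this bound is only valid under the simple-random-sample assumption; with heavy self-selection the formula does not apply, which is precisely the objection being made in this thread.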
 

Thread Starter

Chad M

Joined Apr 16, 2017
0
I can agree to disagree as either approach has its advantages and disadvantages.

Mobile-phone-only households, generational differences (evident in this thread), and declining response rates (e.g., Brick and Williams, 2013) are big disadvantages of telephone surveys. Research has also found that the increase in telemarketing and fraud has decreased participation by some groups (e.g., the most affluent, households with older children). Response rates have been shown to be as low as 7% in telephone surveys, which is not very good for generalizability of the data. Furthermore, multiple studies (e.g., Holbrook et al., 2003; Kreuter et al., 2008; Zhang et al., 2017) have shown that socially desirable responding tends to be highest in telephone surveys and lowest in online surveys.
 

WBahn

Joined Mar 31, 2012
26,272
We can definitely agree to disagree. The only remaining thing I would suggest for consideration: if a response rate of only 7% in a telephone survey is not very good for generalizability of the data, then how generalizable can the data from an online survey possibly be when the response rate isn't even known, but is likely only a small fraction of 1%?
 

jgessling

Joined Jul 31, 2009
82
If you put that survey on Amazon Mechanical Turk I would be happy to take it for $1 or so. There are lots of similar things on offer there. But since I'm retired I don't qualify anyway. Good luck in your research.
 