# Questionnaire Bachelor Thesis (Topic: Online Communities)

#### Luca Wienert

Joined Apr 9, 2019
6
To everybody familiar with Coding: I NEED YOUR SUPPORT. I am currently writing my bachelor thesis on the spread of software solutions through online communication (University: Fontys International Business School, Venlo). I now need to conduct a survey and YOU are my only chance to get viable data.

By leaving your contact info you get the chance to win Amazon gift cards (2x50€/$, 4x25€/$).

Approximate completion time: if you're quick, you can make it in 10 minutes.

Here’s the link to start the survey: https://www.surveymonkey.com/r/SFPTRKY

If you have any questions, feel free to comment below or send me a personal message.

#### jpanhalt

Joined Jan 18, 2008
11,088
Will your results mean anything from a bunch of mostly anonymous responders with mostly unknown levels of expertise?

Maybe a supermarket checkout line would be just as valuable.

For the survey per se:

You need directions for each question or group of questions. For example, for the first two questions, should one mark all that apply or only the most important? Should one rank in order of importance? How are you going to include "other" responses in your results summary? What statistical weight will you give to each, including other?

For yes/no questions, you need a "chose not to respond" or similar option. In my case, a forced yes/no without a "no response" option results in my rapid exit from the survey.

I didn't get past the first three. You also need to track those who drop out of the survey. If only 5% of the people who start the survey finish it, then you have to wonder why.

Finally, an offer to pay may attract only those who are more desperate for money and bias your results.

#### Luca Wienert

Joined Apr 9, 2019
6
> Will your results mean anything from a bunch of mostly anonymous responders with mostly unknown levels of expertise? <snip>
Thanks for the partly constructive feedback. I've considered and improved the aspects you mentioned.

You're right about the description of my questions; I should be more precise about what people should answer. And maybe I should add a "chose not to respond" option, true.

Concerning your header question: I need to get responses from people who potentially program microcontrollers. I can hardly think of media other than online forums to reach these people.

#### jpanhalt

Joined Jan 18, 2008
11,088
> Concerning your header question: I need to get responses from people who potentially program microcontrollers. I can hardly think of media other than online forums to reach these people.
Thank you for your partly constructive response.

How do you define those who "potentially" might program an MCU? That includes a huge crowd who have never done it.

Have you considered manufacturer-sponsored forums? Also, your lead-in wasn't at all clear that you were asking about MCUs specifically. Are you intending to exclude those who write Apple apps, for example? How are you going to filter the dyed-in-the-wool embedded programmer from the Arduino crowd? The latter seem to far outnumber the former here.

#### Luca Wienert

Joined Apr 9, 2019
6
Thanks for your almost constructive response.

But Arduino programmers are fine.
I wanted to keep the audience as broad as possible (within a relevant set of people) to get sufficient responses. Thus, people programming microcontrollers would be ideal, but people who actively code are welcome as well. Further, you might know that psychographic and behavioural traits, which the survey also examines, are not necessarily bound to a specific group of people. I would rather collect these data from a larger set of people who don't exactly correspond to the ideal survey participant than end up with a lack of responses.

#### Raymond Genovese

Joined Mar 5, 2016
1,658
What are the odds of receiving the gift card? How many gift cards are you going to give away per 100 responses from all sources?

#### jpanhalt

Joined Jan 18, 2008
11,088
> What are the odds of receiving the gift card? How many gift cards are you going to give away per 100 responses from all sources?
Why should those odds affect your willingness to respond? Does that prove my case for bias?

#### Raymond Genovese

Joined Mar 5, 2016
1,658
> Why should those odds affect your willingness to respond? Does that prove my case for bias?
I don't know what it proves. Am I the judge in your case? If so, you win... but only if you give me an Amazon gift card (I am easily bought).

#### WBahn

Joined Mar 31, 2012
26,398
> To everybody familiar with Coding: I NEED YOUR SUPPORT. I am currently writing my bachelor thesis on the spread of software solutions through online communication (University: Fontys International Business School, Venlo). <snip>
It looks to me like you are going to draw conclusions based on assumptions about your sample group that are likely to be fundamentally flawed.

For instance, what is the basis for thinking that someone who self-selects to participate is going to be representative of the much larger group of people who chose not to?

What is the basis for thinking that people who participate on an online forum are going to be representative of the much larger group of people who do not?

What is the basis for thinking that people who participate because they have a chance of winning something are going to be representative of the much larger group of people who didn't participate despite that chance?

Do you even have a way of estimating what your participation rate is?

The bottom line is that meaningful conclusions can be drawn from analyzing a small amount of high-quality data, but it is very difficult to draw meaningful conclusions from any amount of poor-quality data. Posting ads asking anyone and everyone who sees them to participate may be easy, but it also pretty much guarantees abysmally poor-quality data.
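
The gap between "small but random" and "large but biased" can be sketched with a quick simulation (all numbers hypothetical: a population in which 20% actually have the trait being measured, and trait-holders are ten times more likely to self-select into the survey):

```python
import random

random.seed(42)

# Hypothetical population of 10,000 people; 20% have the trait of interest.
population = [1] * 2000 + [0] * 8000

# (1) Small but random sample of 100 people.
random_sample = random.sample(population, 100)
est_random = sum(random_sample) / len(random_sample)

# (2) Large but biased sample of 2,000: trait-holders are 10x more likely
# to self-select into the survey, so they are heavily over-represented.
weights = [10 if x == 1 else 1 for x in population]
biased_sample = random.choices(population, weights=weights, k=2000)
est_biased = sum(biased_sample) / len(biased_sample)

print("true rate:               0.20")
print(f"random sample (n=100):   {est_random:.2f}")
print(f"biased sample (n=2000):  {est_biased:.2f}")
```

The biased estimate stays far from the true rate no matter how many responses are collected; extra sample size makes a biased answer more precise, not more accurate.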

#### djsfantasi

Joined Apr 11, 2010
8,075
Getting relevant data from polls is hard.

#### jpanhalt

Joined Jan 18, 2008
11,088
When I was quite young as a student, we were presented with 3 or 4 problems about every other day. On the intervening days, we presented to our professors. Things were alphabetical, and I had the honor of presenting first. As I started through my findings and details, the professor interrupted and asked me, "If you were to publish this in xyzJournal, what would your title be?" Of course, we were all familiar with that journal, and I came up with something I can't remember. What I do remember is that lesson.

@Luca Wienert What will your title be for this study?

#### BR-549

Joined Sep 22, 2013
4,938
That was a terrible survey. Really. That survey site was the worst.

I tried to find a contact link to complain. That is NOT a serious site. If that was a survey... it's all junk. I am NOT talking about the content of the questions.

It's the way the questions are asked... and the pathetically slow script interaction with the server.

I might have missed some questions... who knows with that service.

AND that was just taking a survey.

A survey might (people lie) give you the answers to questions, but it will NEVER tell you the reasons for the answers.

It's the way math works. It will describe a relationship... but NEVER the cause.

Math is female.

#### Luca Wienert

Joined Apr 9, 2019
6
> What are the odds of receiving the gift card? How many gift cards are you going to give away per 100 responses from all sources?
Hard to tell right now; the more people who participate, the lower the chances.
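
Under the simplest assumptions (one entry per respondent, the six cards going to six distinct winners drawn at random), the odds are easy to sketch:

```python
# Rough odds of winning any of the six gift cards (2x50, 4x25),
# assuming one entry per respondent and six distinct winners.
CARDS = 6

for respondents in (50, 100, 500, 1000):
    p_win = min(CARDS / respondents, 1.0)
    print(f"{respondents:>5} respondents -> P(win a card) = {p_win:.1%}")
```

So at 100 respondents each participant has about a 6% chance of winning something; at 1,000 it drops below 1%.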

#### WBahn

Joined Mar 31, 2012
26,398
> Hard to tell right now; the more people who participate, the lower the chances.
I'll help out by keeping the chances up.

#### Luca Wienert

Joined Apr 9, 2019
6
> It looks to me like you are going to draw conclusions based on assumptions about your sample group that are likely to be fundamentally flawed. <snip>
To the first questions: Yes, you might be right that the group of people I am interviewing might not be the ideal sample, but it's the best chance I have given the time available for my project. I simply don't have the time to conduct meticulous sampling.

Second question: If I didn't provide any incentives, the response rate would be even lower.

#### WBahn

Joined Mar 31, 2012
26,398
> To the first questions: Yes, you might be right that the group of people I am interviewing might not be the ideal sample, but it's the best chance I have given the time available for my project. I simply don't have the time to conduct meticulous sampling.
>
> Second question: If I didn't provide any incentives, the response rate would be even lower.
What you are saying in essence is that you really don't care whether your conclusions bear any resemblance to reality, you just want some numbers quickly to throw into your report to make it look acceptable and hope that no one reading it will care enough to question them.

You are likely at a pretty high risk of producing results that carry a negative amount of knowledge. The goal of any kind of research should be to produce an increase in the total knowledge base (along, of course, with whatever specific goals it has). But junk surveys not only produce meaningless results, which at best fail to add to the total knowledge; they tend to make people think that they have learned something when, in truth, they haven't. This is worse than not having done the work at all, as it makes people believe that there is evidence for certain things to be true when, in fact, the actual truth is almost certainly something different -- so the actual total knowledge has been decreased because it has been polluted with wrong knowledge.

If you really insist on going down this road, a better approach would be to write your thesis without regard to the survey based on your conclusions drawn from your reading, research, and interviews and only after that is done look at your survey results and then simply report whether this simple non-scientific survey is in keeping with your conclusions or is counter to them. In either case, you can note that time and cost constraints precluded conducting a proper survey and that the results of the one you did do would best be viewed as being anecdotal.

#### Luca Wienert

Joined Apr 9, 2019
6
> What you are saying in essence is that you really don't care whether your conclusions bear any resemblance to reality, you just want some numbers quickly to throw into your report to make it look acceptable and hope that no one reading it will care enough to question them. <snip>
First, the survey itself was created based on scientific principles. It supports all the statistical tests I need to conduct. The content has been approved through several pretests.

True, it might be a better approach to focus on reviewing existing data. As the research topic is very specific, however, there isn't any existing data to be used for my purposes. Thus, I have two options: first, don't answer the main research question, as no statistical validation has been achieved; second, answer the main research question based on biased results and appraise them critically. Not the best way in general, but given the framework conditions, it's the best decision in my case.

#### jpanhalt

Joined Jan 18, 2008
11,088
> First, the survey itself was created based on scientific principles. It supports all the statistical tests I need to conduct. The content has been approved through several pretests.
That contributes nothing to the validity of your results. How does one "approve" content by using pretests? That sounds to me like you gave the test to a few individuals whom you considered knowledgeable in programming MCUs and adjusted the questions until you got the answers you wanted.

> <snip> Thus, I have two options: first, don't answer the main research question, as no statistical validation has been achieved; second, answer the main research question based on biased results and appraise them critically. Not the best way in general, but given the framework conditions, it's the best decision in my case. <snip>
I presume this is the main research question: "[a thesis] on the spread of software solutions through online communication."

1) How are you going to assess the validity of the responses?
2) If your response rate is <10% of those questioned, how can you reasonably extend those results to the entire sampled group and imply anything about the much larger population from which that group was drawn? In the past, various simpler surveys on this forum have had a response rate of about 20 individuals or less. Even if you get that large a number, it is hardly representative of this forum as a whole, much less is this forum representative of the population of individuals working with MCUs professionally.
3) "Software solutions" with some devices, e.g., Arduino, are seemingly driven by user forums and extensive libraries. Other devices, such as TI's offerings, may be much less so. Your survey does not stratify users by the devices used. Thus, you may get responses almost entirely from Arduino users, not know that, and then apply your statistics for "software solutions" globally. Statistics will not solve that problem.
4) Your survey does have questions about "portability" and such, but no ranking of importance (i.e., mark all that apply). For some, portability is paramount; for others, it is not. You won't know the difference.
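
On point 2, the problem with ~20 responses can be made concrete. A sketch using the standard normal-approximation 95% margin of error for an estimated proportion, taken at the worst case p = 0.5:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% normal-approximation margin of error for a proportion."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (20, 100, 400):
    print(f"n = {n:>3}: +/- {margin_of_error(n):.1%}")
```

With n = 20 the margin is roughly +/- 22 percentage points, so almost any reported percentage is compatible with almost any underlying reality -- and that is before accounting for the self-selection bias discussed above.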