Linux Foundation Bans University After It Intentionally Submitted Buggy Patches

Banning the University of Minnesota amounts to a slap on the wrist and nothing more.
The culprit(s) are still free to cause more damage if they so desire.
How about a stiff monetary fine along with some jail time?
 
A somewhat similar incident happened at my university. A group of brilliant students wanted to do some "security tests" for their final paper. They tested their proposed method against some ISP-related networks without asking for permission first (or something like that; I heard this from my professor, so details are scarce, and I'm no networking guy).

While their method was valid and there was indeed an exploitable vulnerability, the fact that they tested it without asking for permission first got the university into deep trouble with several ISPs. Even the dean had to reason with the ISPs, and there were monetary damages.

My professor used that incident as a real example of "the road to hell is paved with good intentions".
 
Well, this is both the strong point and the weakness of Linux. Any person can contribute and any person can check out the code, so hoaxes can be found.
The problem is that any person can also wreak havoc by introducing "malicious code" into the system, and there is a possibility that such code makes it into the release version. That was the point those students were trying to prove, just like some students might try to prove that a bank can be robbed by committing a robbery in the name of science 😉

Hopefully this event increases the checking and cross-checking of Linux code before it reaches the release stage.
 
Why didn't they release the names of the students? They should be blacklisted from the industry and put on a watchlist. I don't care whether it's on the books legally or just socially agreed upon: release their names! No one will hire them.
 
So much misinformation is circling about this story, along with people filling in their own details as they go. Anyone interested in what actually happened should probably take a look at the research paper in question and the relevant Linux kernel maintainers' email thread.

The researchers who wrote the paper only created three 'bad' patches. They took steps to ensure that these patches never actually made it into the kernel, and tried to minimize the impact on the maintainers' time; see section VI-A in the paper. So people should probably put down their pitchforks. As an aside, it wasn't two students; it was a student and a prof.
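For context, the paper's approach involved patches that look like legitimate fixes but introduce subtle memory-safety bugs such as use-after-free. The sketch below is my own hypothetical illustration of the pattern, not one of the actual patches: a seemingly reasonable "leak fix" in an error path that clashes with the caller's existing cleanup.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct session {
    char *buf;
};

/* Original function: on failure, the caller keeps ownership of
 * s->buf and frees it in its own teardown path. */
static int process(struct session *s)
{
    if (strlen(s->buf) == 0)
        return -1;              /* caller still owns s->buf */
    printf("processing: %s\n", s->buf);
    return 0;
}

/* A "hypocrite" patch could add a plausible-looking cleanup:
 *
 *     if (strlen(s->buf) == 0) {
 *         free(s->buf);        // looks like a leak fix...
 *         return -1;
 *     }
 *
 * ...but the caller's teardown still frees s->buf, so the error
 * path now causes a double free (and any later access to s->buf
 * becomes a use-after-free). */

int main(void)
{
    struct session s = { .buf = strdup("") };

    if (process(&s) < 0)
        fprintf(stderr, "process failed\n");

    free(s.buf);                /* caller-side teardown */
    return 0;
}
```

The point is that each change looks defensible in isolation; only whole-program reasoning about ownership reveals the bug, which is exactly what makes this kind of patch hard to catch in review.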

At some later date, other people from the university submitted some more patches. It's not clear whether they have a direct relation to the two authors, other than working at the same place. These patches do not appear to be malicious, but maintainers felt they were largely useless and/or time-wasting submissions. The submitters claim the patches were the output of a new static analysis tool they were trying out, and insist they were submitted in good faith (which the maintainers didn't buy). This, along with the 'experiment' the two researchers conducted, is what prompted Kroah-Hartman's statement.
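As a hypothetical illustration of why maintainers dismiss such tool-driven patches as noise (again, my own example, not one of the actual submissions): a naive static analysis pass might suggest guarding a free() with a NULL check, even though free(NULL), like the kernel's kfree(NULL), is already defined to do nothing.

```c
#include <stdlib.h>

/* Original code: perfectly fine as-is, because free(NULL) is a
 * no-op per the C standard (and kfree(NULL) is likewise safe in
 * the kernel). */
static void release(char *buf)
{
    free(buf);
}

/* The kind of tool-suggested "fix" maintainers reject as noise:
 * the added guard is redundant and changes no behavior, but a
 * human still has to spend time reviewing it. */
static void release_patched(char *buf)
{
    if (buf)                    /* redundant check */
        free(buf);
}

int main(void)
{
    release(NULL);              /* well-defined, does nothing */
    release_patched(NULL);      /* identical behavior */
    return 0;
}
```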

With regard to Kroah-Hartman's statement about "rip[ping] out your previous contributions, as they were obviously submitted in bad-faith with the intent to cause problems", he was seemingly referring to all past contributions from people with umn.edu email addresses (not necessarily the authors of the paper). As far as I'm aware, there's no evidence that these past submissions had (deliberate) vulnerabilities, and in the email thread one of the maintainers even vouches for some of the past patches as being legit.
 
As opposed to all the other patches, which are unintentionally buggy.

The significant differentiating aspect is intent.

You may write code that, in good faith, you believe will work correctly on the majority of machines running the software.

Here, they purposefully created a malicious aspect in their code to prove a point for a paper and a grade in a class. That crossed the line into black hat, plain and simple.
 
Their execution? Yeah that was a mistake.
To be honest, I'm not sure what the right way to do this would have been. They didn't even create pull requests, so there was no real chance of their 'bad' patches getting in; no concern there. So we're left with the concerns of wasting people's time and 'experimenting' on people without their knowledge.

The problem is you can't tell the person who will be reviewing the patches, as that would obviously affect the result. And even if you tell one of the other maintainers, they're not in a position to give you permission to conduct the experiment on a different maintainer: no one is anyone else's boss, so no one can speak for another's time. They did try to mitigate the time aspect by making the patches only a few lines long.

They justify not receiving consent from the experiment 'participants' by saying that the experiment was conducted on the procedures/systems in place, not on individuals, and thus raised no human-subject ethical concerns (which the school's research review board apparently agreed with). But that justification can obviously be debated.
 
You may write code that, in good faith, you believe will work correctly on the majority of machines running the software.
If open source projects, even hugely important ones that are used worldwide, depend on the assumption that all contributors are operating in good faith, that's quite concerning.

Here, they purposefully created a malicious aspect in their code to prove a point for a paper and a grade in a class.
Nitpicking, but do PhD students even have classes or grades?
 
The problem is you can't tell the person who will be reviewing the patches, as that would obviously affect the result. And even if you tell one of the other maintainers, they're not in a position to give you permission to conduct the experiment on a different maintainer: no one is anyone else's boss, so no one can speak for another's time.
I'm pretty sure the Linux Foundation still has a smallish group of people who have the final say on what goes into the mainline, and hence have the authority. They can still be informed and have someone else handle the review.

It's like doing this at a software company: you inform one of the higher-ups that you want to run the experiment, they agree, and before the next version of the software goes out, they see if anyone caught the problematic change. The only problem afterwards is ensuring the changes used in the experiment are cleaned out before the actual final release.
 
I'm pretty sure the Linux Foundation still has a smallish group of people who have the final say on what goes into the mainline, and hence have the authority. They can still be informed and have someone else handle the review.

It's like doing this at a software company: you inform one of the higher-ups that you want to run the experiment, they agree, and before the next version of the software goes out, they see if anyone caught the problematic change. The only problem afterwards is ensuring the changes used in the experiment are cleaned out before the actual final release.
Like I said, making sure that the patches (the 'malicious' ones by the two researchers) didn't make it into the kernel wasn't a concern. The question is: who has the authority to approve, on someone else's behalf (and without their knowledge), the researchers conducting their experiment on that person? Kroah-Hartman is the lead maintainer of the Linux kernel, so let's say you go and ask him and he agrees. That doesn't mean the person who eventually ends up reviewing the patches will feel the same way. Does Kroah-Hartman have the right to sign up some other maintainer (who, I'm assuming, is an unpaid volunteer) to be a guinea pig without their knowledge?

Getting Kroah-Hartman's (or some other higher-up's) prior approval would certainly have improved the optics, and spared them getting banned. But if the concern is that it's unethical to trick someone and/or waste their time without their knowledge, I don't know that getting Kroah-Hartman's approval really helps. It's different for an employee at a company, at least from a time-wasting perspective: if your boss thinks participating in the experiment is a good use of your time, that's their call, and you get paid either way.
 
But if the concern is that it's unethical to trick someone and/or waste their time without their knowledge, I don't know that getting Kroah-Hartman's approval really helps.
I would argue it's not unethical to trick someone; otherwise, using placebos in clinical trials would be unethical, and you need placebos to establish a control.

I would also argue the time-wasting portion isn't unethical if someone in the know assigned the task. The task still has some purpose, even if that purpose isn't directly related to software development of the kernel.

EDIT: The more I think about this, the more it looks like a psychology experiment.
 
I would argue it's not unethical to trick someone; otherwise, using placebos in clinical trials would be unethical, and you need placebos to establish a control.

I would also argue the time-wasting portion isn't unethical if someone in the know assigned the task. The task still has some purpose, even if that purpose isn't directly related to software development of the kernel.

EDIT: The more I think about this, the more it looks like a psychology experiment.

Actually, that's not how clinical trials are done, because that would be unethical.

In clinical trials, both groups know they may or may not receive the placebo/treatment, so no one is tricked.

And in most trials, even the placebo group receives the treatment if it's considered beneficial; they just receive it later.
 