How can we check if a function is linear? How do I check whether the following function is linear? Example: f(x1, x2, x3) = x1 ⊕ x3 ⊕ x1x2. Please show all the steps of the check.
What operation does ⊕ represent? What operation is implied by X1X2? What set are X1, X2, and X3 contained in? Alternatively, what is the domain of the function f? Do you have a method for extending definition (L1) to a function of 3 variables? Do you have a method for extending definition (L2) to a function of 3 variables?
The more precise answer to the third question is that X1, X2, and X3 belong to {0, 1} or {TRUE, FALSE} or any other two-element set of your choosing. Start with (L2): is f(αX1, X2, X3) ≡ αf(X1, X2, X3)? Then you just keep going until you exhaust all the cases.
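Since the domain is a two-element set, the check suggested above can simply be brute-forced. A minimal Python sketch, assuming ⊕ is XOR and X1X2 means AND over {0, 1} (the OP has not confirmed either operation):

```python
from itertools import product

# The OP's function, under the assumption that ⊕ = XOR and
# juxtaposition = AND, with values in {0, 1}.
def f(x1, x2, x3):
    return x1 ^ x3 ^ (x1 & x2)

# Check (L2) in the first slot: f(a*x1, x2, x3) == a*f(x1, x2, x3)
# for every scalar a in {0, 1} and every input triple.
violations = [(a, x)
              for a in (0, 1)
              for x in product((0, 1), repeat=3)
              if f(a * x[0], x[1], x[2]) != a * f(*x)]
print(violations)
```

Every violation has a = 0 and x3 = 1: setting a = 0 zeroes the right-hand side, but f(0, x2, x3) = x3 can still be 1. So the homogeneity check in even a single slot already fails.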
This is your homework. My purpose here is not to show you how it should be done, but to help you find your own way to do it. Why worry about (L1) until you have convinced yourself that (L2) works in all cases? I'm suggesting you start with (L2) because that is the easier thing to evaluate. Don't forget the following case: is f(αX1, βX2, γX3) ≡ αβγf(X1, X2, X3)? I'd be surprised if it were.
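The three-scalar case above can be brute-forced the same way. A sketch under the same unconfirmed assumptions (⊕ = XOR, juxtaposition = AND, values in {0, 1}):

```python
from itertools import product

# Same assumed encoding as before.
def f(x1, x2, x3):
    return x1 ^ x3 ^ (x1 & x2)

# Search for a counterexample to f(a*x1, b*x2, c*x3) == a*b*c*f(x1, x2, x3).
counterexample = next(
    ((a, b, c, x)
     for (a, b, c) in product((0, 1), repeat=3)
     for x in product((0, 1), repeat=3)
     if f(a * x[0], b * x[1], c * x[2]) != a * b * c * f(*x)),
    None)
print(counterexample)
```

As predicted, a counterexample exists: zeroing any one scalar zeroes the right-hand side, while the left-hand side can still evaluate to 1.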
There is a bit of a problem: I'm not sure how to rigorously state the principle of (L1) for a function of 3 variables. I was hoping a counterexample to (L2) would obviate the need to proceed further. If you can come up with a formulation of (L1) for a function of three variables, it would help. We know from multivariable polynomials over the real numbers that a term such as xy automatically makes the polynomial nonlinear. I don't know whether you can draw a similar conclusion from the AND operation in the original function. If I had to guess, I would say yes.
Is that way of writing it really in accordance with the (L2) definition? Shouldn't what one checks rather be the following: f(αβγX1, αβγX2, αβγX3) ≡ αβγf(X1, X2, X3)? Or can you multiply scalars into as many or as few of the variables as you please? That isn't how the definition reads to me.
The way I look at it, you don't really need to multiply all of the variables by three scalars. The point of (L2) is that any, or all, of the scalars can be brought outside the function brackets. It should also be possible to demonstrate that f(αX1, βX2, γX3) ≡ αf(X1, βX2, γX3).
But this isn't required by the property of linearity, is it? Take z = f(x, y) = x + y. Isn't this a linear function of x and y? But is f(5x, y) = 5·f(x, y)? Linearity requires that superposition hold, which, in one variable, means f(α·x1 + β·x2) = α·f(x1) + β·f(x2). I think (but am not positive) that the extension of the concept of a linear function to functions of multiple variables is the following: given z = f(x, y), f is linear if and only if f(α·x1 + β·x2, γ·y1 + δ·y2) = α·f(x1, 0) + β·f(x2, 0) + γ·f(0, y1) + δ·f(0, y2).
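Both points above can be spot-checked numerically with f(x, y) = x + y: scaling one argument does not scale the output, yet the proposed superposition form does hold. A quick sketch (the specific numbers are arbitrary picks, and a numerical spot check is not a proof):

```python
import random

# z = f(x, y) = x + y over the reals.
def f(x, y):
    return x + y

# Scaling only one argument does NOT scale the output:
x, y = 1.0, 1.0
lhs, rhs = f(5 * x, y), 5 * f(x, y)
print(lhs, rhs)  # 6.0 10.0

# But the proposed multivariable superposition form does hold for x + y:
random.seed(0)
a, b, c, d = (random.uniform(-2, 2) for _ in range(4))
x1, x2, y1, y2 = (random.uniform(-2, 2) for _ in range(4))
left = f(a * x1 + b * x2, c * y1 + d * y2)
right = a * f(x1, 0) + b * f(x2, 0) + c * f(0, y1) + d * f(0, y2)
print(abs(left - right) < 1e-9)  # True
```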
I think that, for multiple variables, the property you are basically looking for is f(x, y) = f(x, 0) + f(0, y).
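For what it's worth, the decomposition test just proposed can be applied directly to the OP's function (again assuming ⊕ = XOR and juxtaposition = AND, and reading "+" as XOR in the Boolean setting):

```python
from itertools import product

# The OP's function under the assumed encoding.
def f(x1, x2, x3):
    return x1 ^ x3 ^ (x1 & x2)

# Test f(x1, x2, x3) == f(x1, 0, 0) ⊕ f(0, x2, 0) ⊕ f(0, 0, x3) on all inputs.
failures = [x for x in product((0, 1), repeat=3)
            if f(*x) != f(x[0], 0, 0) ^ f(0, x[1], 0) ^ f(0, 0, x[2])]
print(failures)
```

The decomposition reduces to x1 ⊕ x3, so it disagrees with f exactly where the AND term fires, i.e. where x1 = x2 = 1.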
Well, part of the problem is that I can't find any verification that any particular formulation of the linearity tests, (L1) and (L2), extends to multivariable functions. On top of that, it is less than clear that Boolean operations like AND and XOR possess linearity properties with multivariable functions. With respect to your proposition, I'm not convinced without a more rigorous proof. The closest piece of anecdotal information I possess is that an affine equation like Ax + By + Cz + D = 0 represents a plane in ℝ³ and is called linear.
No, I can't, but I haven't really thought about it. I'm guessing you asked the question because you couldn't think of one either.
Finally, from Wikipedia. Hard to interpret, but definitive: http://en.wikipedia.org/wiki/Linearity

In Boolean algebra, a linear function is a function f for which there exist a0, a1, ..., an ∈ {0, 1} such that f(b1, ..., bn) = a0 ⊕ (a1 ∧ b1) ⊕ ... ⊕ (an ∧ bn), where b1, ..., bn ∈ {0, 1}.

A Boolean function is linear if one of the following holds for the function's truth table:

1. In every row in which the truth value of the function is 'T', there are an odd number of 'T's assigned to the arguments, and in every row in which the function is 'F' there is an even number of 'T's assigned to the arguments. Specifically, f('F', 'F', ..., 'F') = 'F', and these functions correspond to linear maps over the Boolean vector space.

2. In every row in which the value of the function is 'T', there is an even number of 'T's assigned to the arguments of the function, and in every row in which the truth value of the function is 'F', there are an odd number of 'T's assigned to the arguments. In this case, f('F', 'F', ..., 'F') = 'T'.

Another way to express this is that each variable either always makes a difference in the truth value of the operation or never makes a difference. Negation, logical biconditional, exclusive or, tautology, and contradiction are linear functions.

So for the problem that the OP posed originally, one writes out the truth table and examines each row. It turns out the X1X2 term is a nonlinearity marker, just as it is for functions over the reals.
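The truth-table test above is easy to mechanize: condition 1 says f agrees with the parity of its arguments on every row, condition 2 says f agrees with the complement of the parity on every row. A sketch applying it to the OP's function (assuming ⊕ = XOR, juxtaposition = AND, with 1 = 'T' and 0 = 'F'):

```python
from itertools import product

# The OP's function under the assumed encoding.
def f(x1, x2, x3):
    return x1 ^ x3 ^ (x1 & x2)

rows = list(product((0, 1), repeat=3))

# Condition 1: f(row) equals the parity of the row (odd number of 1s -> 1).
cond1 = all(f(*r) == sum(r) % 2 for r in rows)
# Condition 2: f(row) equals the complement of the parity, on every row.
cond2 = all(f(*r) == 1 - sum(r) % 2 for r in rows)

print(cond1 or cond2)  # False -> not linear by the Wikipedia criterion
```

For example, the row (0, 1, 1) gives f = 1 with an even number of 1s, which breaks condition 1, and the all-zero row gives f = 0, which breaks condition 2.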
Nope, I can't either, and I've tried. I guess f(x, y, z) = 0 would work, but that might be the only one.
I don't think it IS linear (at least not a linear function); I think it is affine. My understanding is that there is a difference between a linear equation and a linear function: f(x) = mx + b is NOT a linear function, because superposition does not hold. It is, however, an affine function. But I've looked a bit and can't find a clear definition of a linear multivariable function. The closest I've found is linear maps, but those are couched in terms and concepts I'm just not too conversant with.
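The claim that superposition fails for f(x) = mx + b is easy to verify numerically. A quick sketch (the values of m, b, and the test points are arbitrary picks for illustration):

```python
# f(x) = m*x + b with a nonzero intercept.
m, b = 2.0, 3.0
def f(x):
    return m * x + b

# Superposition: f(a*x1 + beta*x2) should equal a*f(x1) + beta*f(x2).
a, beta = 1.0, 1.0
x1, x2 = 1.0, 2.0
lhs = f(a * x1 + beta * x2)      # f(3) = 9.0
rhs = a * f(x1) + beta * f(x2)   # 5.0 + 7.0 = 12.0
print(lhs, rhs)
```

The two sides differ by exactly b (the intercept gets counted once on the left and twice on the right), which is why the function is affine but not linear.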
The drift I'm getting is that if the derivatives are constant, the function is linear. This would apparently make affine functions linear as well, since the derivative of a constant vanishes.