An Explanation of Computation Theory for Lawyers

Wednesday, November 11 2009 @ 07:48 PM EST
If I had to describe the fairly universal geek reaction to the oral argument at the US Supreme Court on Monday in In Re Bilski, I would have to say it's a worry that some of the participants didn't seem to understand computers or the tech behind software very well. Groklaw member PolR has written an explanation for lawyers of computation theory to try to fill a gap in their knowledge that he has observed from reading
legal briefs.
A brief extract: Lawyers and judges know about modern electronics in general and computers in particular. They know about code and the compilation process that turns source code into binary executable code. Computation theory is something different. It is an area of mathematics that overlaps with philosophy. Computation theory provides the mathematical foundations that make it possible to build computers and write programs. Without this knowledge several of the fundamental principles of computer science are simply off the radar and never taken into consideration.
The fundamentals of computation theory are not obvious. A group of the greatest mathematicians of the twentieth century needed decades to figure them out. If this information is not communicated to lawyers and judges they have no chance to understand what is going on and mistakes are sure to happen. All the errors I have found result from this omission.
The purpose of this text is to help fill the gap. I try to explain all the key concepts in a language most everyone will understand. I will underline the key legal questions they raise or help answer as they are encountered. I will conclude the article with examples of common mistakes and how the knowledge of computation theory helps avoid them.

Yes. We might as well give it a try. After all, we've spent a lot of time and effort explaining the legal process to geeks; now it's time for the geeks to help the lawyers out with the tech. They actually do want to get it right, you know. You'll find links to the main cases cited at the end, along with some helpful resources.
Feel free, then, those of you who are also programmers and engineers, to amplify, correct, explain, or if there is another topic you feel lawyers and judges need to understand, email me about it, and let's see if we can help them out. Similarly, if you are a lawyer or judge, feel free to ask questions or ask that further information be provided. What I get from the paper is that there isn't really a difference between what a human does with a paper and pencil and what a computer does, except speed, and that neither method of computation should be patentable subject matter. Also, it's now clear that just as you wouldn't go into litigation without a lawyer, if you are wise, because that is their area of expertise, lawyers also shouldn't assume they understand software and patents and how they relate unless they get help from technology experts who know how software and computers, and the underlying math, really work.
Update: If you'd like to print the article without the introduction, I've placed a copy here. And here's a PDF.
And for those of you who are interested in digging a bit more into computational theory, with an emphasis on the theory as opposed to the consequences with regard to patents, here's a reference for you, A Problem Course in Mathematical Logic, by Stefan Bilaniuk. Here's a description of what you'll find:
Parts I and II, Propositional Logic and First-Order Logic respectively, cover the basics of these topics through the Soundness, Completeness, and Compactness Theorems, plus a little on applications of the Compactness Theorem. [...] Part III, Computability, covers the basics of computability using Turing machines and recursive functions; it could be used as the basis of a one-term course. Part IV, Incompleteness, is concerned with proving the Gödel Incompleteness Theorems.
*****************************
An Explanation of Computation Theory for Lawyers
By PolR
[This article is licensed under a Creative Commons License.]
I
am a computer professional with over 25 years of experience. I have a
Master's degree in computer science and no legal expertise beyond what
I acquired reading Groklaw. Yet I have developed an interest in
understanding US patent law and how it applies to software. I have
read a few court cases and legal briefs. Based on my professional
knowledge I can say that in some precedent-setting cases the legal
understanding of computers and software is not technologically
correct.
I
see the situation like this. The authors of legal briefs and court
rulings have enough of an understanding to feel confident they can
write meaningful arguments on the topic. Yet they do not
understand computers and software well enough to reach technically
correct conclusions. The unfortunate result is legal precedents that
do not connect with reality. Computers don't work the way some legal
documents and court precedents say they do1.
I
think I can identify the root cause of this. It seems that
nobody has ever taught the legal community the basic concepts of
computation theory. At the very least I don't find any evidence that
computation theory is known in the cases and legal briefs I have read
so far. Lawyers and judges know about modern electronics in general
and computers in particular. They know about code and the compilation
process that turns source code into binary executable code.
Computation theory is something different. It is an area of
mathematics that overlaps with philosophy2.
Computation theory provides the mathematical foundations that make it
possible to build computers and write programs. Without this
knowledge several of the fundamental principles of computer science
are simply off the radar and never taken into consideration.
The
fundamentals of computation theory are not obvious. A group of the
greatest mathematicians of the twentieth century needed decades to
figure them out. If this information is not communicated to lawyers and
judges they have no chance to understand what is going on and
mistakes are sure to happen. All the errors I have found result from
this omission.
The
purpose of this text is to help fill the gap. I try to explain all
the key concepts in a language most everyone will understand. I will
underline the key legal questions they raise or help answer as they
are encountered. I will conclude the article with examples of common
mistakes and how the knowledge of computation theory helps avoid
them.
Let's
illustrate the importance of computation theory with one of the legal
issues computation theory will help resolve. Consider the following
list of statements.
- All
software is data.
- All
software is discovered and not invented.
- All
software is abstract.
- All
software is mathematics.
If
my understanding of the US patent law is correct, whether or not any
of these statements is true could determine whether or not
software is patentable subject matter. The resolution of this issue
has serious consequences for the computer electronics and software
industries.
When
you know computation theory, you know without a shred of doubt that
each of these statements states a fact that is grounded in well-established mathematics. If you don't know computation theory, these
statements will probably look to you like debatable issues.
Effective
methods
Historically,
modern computation theory started in the 1920s and 1930s with
research on "effective methods". In a totally atypical manner,
the definition of this mathematical concept is written in plain
English as opposed to some mathematical notation that would be
impenetrable to laymen. Here it is3:
A method, or procedure, M, for achieving some desired result is called
'effective' or 'mechanical' just in case
1. M is set out in terms of a finite number of exact instructions (each
instruction being expressed by means of a finite number of symbols);
2. M will, if carried out without error, always produce the desired
result in a finite number of steps;
3. M can (in practice or in principle) be carried out by a human being
unaided by any machinery save paper and pencil;
4. M demands no insight or ingenuity on the part of the human being
carrying it out.
The
terms 'effective' or 'mechanical' are terms of art in mathematics and
philosophy. They are not used in their ordinary, everyday meaning.
The phrase "effective method" is a term of art. This term has
nothing to do with the legal meaning of "effective" and "method".
The fact that these two words also have a meaning in patent law is a
coincidence.
An
effective method is a procedure to perform a computation that is
entirely defined by its rules. The human that carries out the computation
must follow the rules strictly; otherwise the answer he gets is not
the one that is intended. Examples of effective methods were taught
to us in grade school when we learned how to perform ordinary
arithmetic operations like addition, subtraction, multiplication
and division. We were taught procedures that are to be carried out
with pencil and paper. You have to perform these operations exactly
as they were taught; otherwise the result of the calculation is
wrong.
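To make this concrete for readers who program, here is a rough Python sketch of grade-school column addition written as explicit rules. The function name and the digit-list representation are just illustrative choices of mine; the point is that every step is spelled out in advance and no judgment call is left to whoever carries it out.

    def column_add(a_digits, b_digits):
        # Numbers are given as lists of decimal digits, least significant digit first.
        result, carry = [], 0
        for i in range(max(len(a_digits), len(b_digits))):
            a = a_digits[i] if i < len(a_digits) else 0
            b = b_digits[i] if i < len(b_digits) else 0
            total = a + b + carry        # rule: add the column and the carry
            result.append(total % 10)    # rule: write down the last digit
            carry = total // 10          # rule: carry the rest to the next column
        if carry:
            result.append(carry)
        return result

    # 475 + 38 = 513; the digits are listed least significant first
    print(column_add([5, 7, 4], [8, 3]))   # prints [3, 1, 5]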
This
definition was written in the early part of the 20th century, before
the invention of the electronic computer. This was before the word
"algorithm" came to be used in computer science. Effective
methods are precursors to computer algorithms. The definition works
from the assumption that you need a human to do a computation4.
Clause 3 explicitly mentions that the computation must be one that it
is possible for a human to carry out with pencil and paper. Aid from
machinery is explicitly forbidden. In effect this definition spells
out what kind of human activity would qualify as an algorithm, but
without using that terminology. The term "effective method" was
used instead.
Of
course this concept has since been superseded by the modern concept
of computer algorithm. Mathematical discoveries from the 1930s have
shown the human requirement may be dropped from the definition
without changing the mathematical meaning. Effective methods and
computer algorithms are the same thing, except that computers are
allowed to execute algorithms without running afoul of the
definition. This is known as the Church-Turing
thesis. This is one of the most important results of computation
theory and theoretical computer science. A big part of this text will
be devoted to explaining exactly what this thesis is, how
mathematicians discovered it and what evidence we have of its
truth.
There
are a few fine points in this definition that deserve further
explanation. Clause number 3 states that an effective method can be
carried out "in practice or in principle" by a human being. This
language is there because mathematics is about abstract concepts. You
don't want an effective method to stop being an effective method when
a human runs out of pencils or paper. You don't want the method to
stop being an effective method when a human dies of old age before
being done with the calculation. This is the sort of thing that may
happen if, for example, you try to calculate a ridiculously large
number of decimals of π5.
By stating the method may be carried out "in principle", the
definition sets aside these practical limitations.
Clause
number 4 states that no demands are made on insight or ingenuity. This
clause implies that all the decisions that are required to perform
the calculation are spelled out explicitly by the rules. The human
must make no judgment call and use no skill beyond his ability to
apply the rules with pencil and paper. The implication is that the
human is used as a biological computing machine and serves as a
reference for defining what a computation is.
Effective
methods and the Church-Turing thesis are relevant to an issue that
has legal implications. What is the relationship between an algorithm
executed by a computer and a human working with pencil and paper?
Abstract methods made of mental steps are not patentable subject
matter6.
When is software such a method? When is it a physical process? With
knowledge of computation theory and the Church-Turing thesis, lawyers
and courts will be able to draw on the discoveries of mathematics to
answer these questions with greater accuracy.
Formal Systems
Why
were mathematicians so interested in effective methods in the 1920s?
They wanted to solve a problem that was nagging them at the time.
Sometimes someone discovers a theorem and publishes a proof. Then
some time passes, months, years or decades, and then someone else
discovers another theorem that contradicts the first and publishes a
proof. When this happens, mathematicians are left scratching their heads.
Both theorems cannot be true. There must be a mistake somewhere.
If the proof of one of the theorems is wrong then the problem is solved. They revoke the
theorem with a faulty proof and move on. But what if both proofs are
sound and stand scrutiny?
These
kinds of discoveries are called paradoxes. They are indicative that
there is something rotten in the foundations that define how you
prove theorems. Mathematical proofs are the only tool that can
uncover mathematical truth. If you never know when the opposite
theorems will be proven, you can't trust your proofs. If you can't
trust your proofs, you can't trust mathematics. This is not
acceptable. The solution is to look at the foundations, find the
error and correct it so paradoxes don't occur any more. In the early
twentieth century, mathematicians were struggling with pretty nasty
paradoxes. They felt the need to investigate and fix the foundations
they were working on7.
The
relationship between algorithms and mathematical proofs is an
important part of computation theory. It is part of the evidence that
software is abstract and software is mathematics. The efforts to fix
the foundations of mathematics in the early twentieth century are an
important part of this story.
One
of the most prominent mathematicians of the time, David Hilbert,
proposed a program that, if implemented, would solve the problem of
paradoxes. He argued that mathematical proofs should use formal
methods of manipulating the mathematical text, an approach known as
formalism. He proposed to base mathematics on formal systems that are
made of three components.
- A synthetic language with a defined syntax
- An explicit list of logical inference rules
- An explicit list of axioms
Let's
review all three components one by one to see how they work.
Mathematics uses special symbols to write propositions like a+b=c or
E=mc². There are rules on how you use these symbols. You
can't write garbage like +=%5/ and expect it to make
sense. The symbols together with the rules make a synthetic language.
The rules define the syntax of the language. Computer programmers
use this kind of synthetic language every day when they program the
computer. The idea of such a special language to express what can't
be easily expressed in English did not originate with computer
programmers. Mathematicians thought of it first8.
How
do you test if your formula complies with the syntax? Hilbert
required that you use an effective method that verifies that the
rules are obeyed. If you can't use such a method to test your syntax
then the language is unfit to be used as a component of a formal
system9.
Inference
rules are how you make deductions10.
A deduction is a sequence of propositions written in the language of
mathematics, each of which logically follows from what precedes it. At each step in the sequence there
must be a rule that tells you why you are allowed to get there given
the propositions that were previously deduced. For example suppose
you have a proof of A and separately you have a proof of B. Then the
logical deduction is you can put A and B together to make "A and
B". This example is so obvious that it goes without saying. In a
formal system you are not allowed to leave obvious things unsaid.
All the rules must be spelled out before you even start making
deductions. You can't use anything that has not been spelled out no
matter how obvious it is. This list of rules is the second component
of a formal system.
It
has been known since the work of the philosophers Gottlob Frege and Bertrand
Russell, and their successors, that all logical inference rules can be
expressed as syntactic manipulations. For example, take the previous
example where we turn separate proofs of A and B into a proof of "A
and B" together. The inference consists of taking A, taking B, put
an "and" in the middle and we have "A and B". You don't need
to know what A and B stand for to do this. You don't need to know how
they are proved. You just manipulate the text according to the rule.
All the inferences that are logically valid can be expressed by rules
that work in this manner11.
This opens an interesting possibility. You don't use a human judgment
call to determine if a mathematical proof is valid. You check the
syntax and verify that all inferences are made according to the rules
that have been listed12.
This check is made by applying an effective method13.
You may find a computerized language for writing and verifying proofs
in this manner at the Metamath Home
Page.
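As a rough illustration (mine, not Metamath's), here is how one such inference rule can be written as a pure text manipulation in Python. The rule joins two already-proven propositions with an "and"; nothing in the code knows or cares what the propositions mean.

    def and_introduction(proof_of_a, proof_of_b):
        # Take the conclusions of two proofs and splice them together.
        a = proof_of_a["conclusion"]
        b = proof_of_b["conclusion"]
        return {"conclusion": "(" + a + " and " + b + ")",
                "rule": "and-introduction",
                "premises": [proof_of_a, proof_of_b]}

    p1 = {"conclusion": "x > 0", "rule": "axiom", "premises": []}
    p2 = {"conclusion": "x < 10", "rule": "axiom", "premises": []}
    print(and_introduction(p1, p2)["conclusion"])   # prints (x > 0 and x < 10)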
If
there is no human judgment call in verifying proofs and syntax, where
is human judgment to be found? It is when you find a practical
application for the mathematical language. The mathematical symbols
can be read and interpreted according to their meanings. You don't
need to understand the meaning to verify the proof is carried out
according to the rules of logic, but you need to understand the
meaning to make some real-world use of the knowledge you have so
gained. For example when you prove a theorem about geometry, you
don't need to know what the geometric language means to verify the
correctness of the proof, but you will need to understand what your
theorem means when you use it to survey the land.
Another
place where human judgment is required is in the choice of the
axioms. Any intuitive element that is required in mathematics must be
expressed as an axiom. Each axiom must be written using the
mathematical language. The rules of logic say you can always quote an
axiom without having to prove it. This is because the axioms are
supposed to be the embodiment of human intuition. They are the
starting point of deductions. If some axiom doesn't correspond to
some intuitive truth, it has no business in the formal system. If
some intuitive truth is not captured by any axiom, you need to add
another axiom14.
The
list of axioms is the third component of a formal system. Together
the syntax, the rules of inferences and the axioms include everything
you need to write mathematical proofs. You start with some axioms and
elaborate the inferences until you reach the desired conclusion. Once
you have proven a theorem, you add it in the list of propositions you
can quote without proof. The assumption is that the proof of the
theorem is included by reference in your proof. And because all the
rules and axioms have been explicitly specified from the start and
meticulously followed, there is no dark corner in your logic where
something unexpected can hide and eventually spring out to undermine
your proof.
To
recapitulate, effective methods play a big role in this program. They
are used (1) to verify the correctness of the syntax of the
mathematical language and (2) to verify that the mathematical proofs
comply with the rules of inferences. The implication is that there is
a tie between formal methods and abstract mathematical thinking. If
you consider the Church-Turing thesis, there is a further implication
of a tie between computer algorithms and abstract mathematical
thinking. This article will elaborate on the nature of this tie.
Let's
return to Hilbert's program. When you make explicit all parts of the
mathematical reasoning in a formal system, the very concept of
mathematical proof is amenable to mathematical analysis. You can
write proofs about how proofs are written and analyze what properties
various formal systems have15.
Hilbert intended to use this property of formal systems to bring the
paradoxes to a permanent end. Hilbert's program was to find a formal
system suitable to be the foundations of all of mathematics that
would meet these requirements:
- The
system must be consistent. This means there must be no proposition
that could be proven both true and false. Consistency ensures there
are no paradoxes.
- The
system must be complete. This means every proposition that could
possibly be written in the system could be either proven true or
proven false. Completeness ensures that every mathematical question
that could be formulated with mathematical language could be solved
within the confines of the system.
- The
system must be decidable. This means that there must be an effective
method to find at least one actual proof of a proposition when it is
true or at least one actual refutation of the proposition when it is
false16.
Decidability ensures that when confronted with a mathematical
problem we always know how to solve it.
If
the formal system is amenable to mathematical analysis then,
according to Hilbert, it would be possible to find a mathematical
proof that the system meets the requirements of consistency,
completeness and decidability17.
Completing such a program would be a formidable intellectual
accomplishment. All of mathematics would be contained in a system
based on axioms, inference rules and effective methods. Every
mathematical question would be solved by carrying out a decision
method where human judgment plays no part. In this approach, the role
of human intuition is limited to the selection of the axioms.
From Effective Methods to Computable Functions
Hilbert's
program was successful in part and failed in part. Useful formal
systems that appear to be free of paradoxes were developed18.
This is the successful part. The failed part is there is no formal
system that can be used as the foundation of the entirety of
mathematics and is also consistent, complete and decidable. Such
formal systems are not possible. Theorems that show that these
properties are mutually incompatible have been found. But before the
mathematicians reached this conclusion, substantial research was done.
It is in the course of this research that the fundamental concepts of
computation theory were developed. Ultimately these findings have
led to the development of the modern programmable electronic computer19.
When
working on Hilbert's program, mathematicians were struggling
with another nagging issue. Hilbert's program called for every
mathematical concept to be defined using a special mathematical
language. How do we write the definition of effective method in such
a language? The reference to a human being calculating with pencil
and paper is not easily amenable to this kind of treatment. This is
why the original definition is written in plain English. Some
specific effective methods could be defined mathematically. Effective
methods in general were something a mathematician would recognize when he
saw one but could not define with mathematical formulas.
The
work on Hilbert's program is what led to the discovery of how to
define mathematically the concept of effective methods. Several
different but equivalent definitions were found. To reflect the
definitions being used, effective methods are called "computable
functions" when we refer to the mathematical definitions. When
effective methods are considered for use with a computer instead of
being carried out by a human, they are called "computer algorithms".
When translated into a programming language for use on an actual
computer, they are called "computer programs".
These
multiple definitions of computable functions led to very different
methods of how to actually make the calculations. A human being that
would use one of these systems would write on paper something very
different from someone using another system. But still they would
all do the same calculations. The reason is that algorithms are
abstractions different from the specific steps of how the calculation
is done20.
For
example, consider a sorting algorithm written in a programming
language like C. You give the program a list of names in random
order and the program prints them sorted in alphabetic order. The
program may be compiled for Intel x86, for Sun SPARC, for ARM or any
other make and brand of CPUs. The binary instructions generated by
the compiler for one CPU will be very different from the instructions
for the other CPUs because this is how CPUs are made. If you trace
the hardware instructions that are actually executed with a debugger,
you will verify that the execution of the code is different from CPU
to CPU because of the difference in the instructions. But still it is
the same C program that executes. The names will be printed out in
sorted order on all CPUs. The abstract sorting algorithm is the same
on all CPUs.
This
difference between specific instructions and abstract algorithm has
its basis in computation theory. By the time you are done reading
this article, you will know what that basis is.
Three of the definitions of computable functions stand out as the
most important ones. The Church-Turing thesis states that
computable functions as defined by these definitions are the same
functions as those one may compute with effective methods.
- Kurt Gödel's recursive functions
- Alonzo Church's lambda-calculus
- Alan Turing's Turing machines
One
needs to know all three definitions to understand computation theory.
All three definitions played a role in the discovery that Hilbert's
program is impossible to accomplish. All three definitions bring
considerable insights as to what an algorithm is. All three
definitions are equivalent to both effective methods and computer
algorithms. Together they explain why the work of a human with pencil
and paper and the work of a computer manipulating bits follow the
exact same abstract processes.
Now
let's look at each of these three definitions one by one and flesh out
the most legally relevant concepts of computation theory.
Kurt Gödel's Recursive Functions
In
mathematics the word "function" is a term of art. A function is a
correspondence between a pair of mathematical objects that is such
that the second object is determined by the first. For example the
area of a circle is a function. When you know the circle, you can
find out the area. Addition is a function. You know a pair of
values21
and you add them up to get the result. Often functions are associated
with a method or formula that states how to calculate the value of the
function.
This
terminology issue being settled, let's go back to the main topic.
Kurt Gödel wasn't looking to define computable functions. This
happened almost by accident. He was trying to do something very
different and a few years later other mathematicians found out the
implications of what he did.
Kurt
Gödel was working on the Hilbert program. He was working on a formal
system called "Peano arithmetic" that defines the arithmetic of
positive integers. These are the numbers 0, 1, 2, 3 ... to infinity.
The basic principle of Peano arithmetic is to base the entirety of
arithmetic on the ability to count. This is a foundation made of two
basic components. The first one is the number zero (0). The other one
is the successor function that increases a number by one. The
successor of 0 is 1, the successor of 1 is 2 etc. But the language of
Peano arithmetic is too rudimentary to write numbers this way. It can
only write the successor by appending an apostrophe. Zero is written
0, but 1 is written 0' and two is written 0'' and you continue like
this appending one apostrophe every time you increment by one. You
can identify a number by counting the apostrophes that are appended
to 0.
Why
are mathematicians bothering to do this? It is because this is
research on the foundations. The goal is to find out what are the
most minimal concepts that could serve as the basis of arithmetic. Do
we need addition? Do we need subtraction? Do we need Arabic numbers?
Or can we do everything with zero and successors alone? If the
foundational concepts are kept very minimal, it reduces the risks
that some complexity hides paradoxes that will pop up after a decade
or two of research. It also makes proofs about the formal system
simpler.
Remember
the Hilbert program. There is a requirement to analyze the formal
system to prove consistency, completeness and decidability. If the
formal system is complex, these proofs will be hard to write. The
hope is to make these proofs easier by keeping the formal system
simple.
Gödel
was studying a method of defining arithmetic functions called
recursion. When you write a recursive definition22
you must specify two rules.
- You define the value of the function for the number zero.
- Assuming you know the value of the function for an arbitrary number n, you define how you compute the function for n'.
With
these two rules you know how to compute the value of the function for
all integers. Rule 1 tells you the value of the function for 0. Then
you use rule 2 to compute the value of the function for 1 given that
you know the value for 0. Then you proceed to compute the value for 2
and then 3 etc. As an example here is how you define addition using
recursion.
- m + 0 = m
- m + n' = (m + n)'
For
the sake of example assume you want to compute 5+3. Starting with
5=0''''' and 3=0''' the steps go as follows:
5 + 3 = 0''''' + 0'''                (translation into the Peano arithmetic language)
0''''' + 0''' = (0''''' + 0'')'      (by rule no. 2)
(0''''' + 0'')' = (0''''' + 0')''    (by rule no. 2)
(0''''' + 0')'' = (0''''' + 0)'''    (by rule no. 2)
(0''''' + 0)''' = (0''''')'''        (by rule no. 1)
(0''''')''' = 0''''''''              (the parentheses can be removed at this point)
0'''''''' = 8                        (translation back into Arabic numbers)
This
illustrates how recursion works as an effective method. Every time
you define something using recursion, you actually define an
effective method to perform the calculation.
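For programmers, here is a small Python sketch of the same recursive definition. It is only an illustration of mine: numbers are represented as strings of apostrophes, exactly as in the Peano notation above, and the two rules of the definition appear as the two branches of the function.

    def add(m, n):
        # m and n are strings of apostrophes: "'''''" is 5, "'''" is 3.
        if n == "":                      # rule 1:  m + 0 = m
            return m
        return add(m, n[:-1]) + "'"      # rule 2:  m + n' = (m + n)'

    five, three = "'''''", "'''"
    print(len(add(five, three)))         # prints 8, one apostrophe per unit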
This
example also shows why no one would want to actually do additions in
this manner. The process is way too cumbersome. The goal is not to
provide an alternative to the usual methods of doing arithmetic. The
goal is to provide a foundation that is sound and free of paradoxes.
The
purpose of the recursive definition of addition is to provide a
reference. Other methods to add numbers are allowed but only if we
can prove a theorem showing these methods will yield the same
answer as the recursive definition. Without the theorem we have no
guarantee that the alternative calculation will yield the correct
answer. The point of the theorem is to provide the guarantee that the
procedures we use to calculate additions rest on the foundations.
Formal
systems work like an edifice. You have a small number of concepts and
associated axioms and rules of inference that provide a foundation.
But the foundation is not used directly because it is too minimal to
be usable in a practical situation. You must build theorems and
theorems over theorems until you have a body of knowledge that is
usable. Part of the task required by the Hilbert program was to build
the edifice to show the proposed foundations are suitable for the
intended purpose. This is what Gödel's concept of recursion is
contributing to.
Gödel Numbers
When
working on recursion, Gödel got a revolutionary insight. He noticed
something useful about prime numbers.
I
suppose you remember prime numbers from your high school arithmetic.
But if you don't, let's recapitulate. A prime number is a number
greater than 1 that can be divided evenly only by 1 or itself. There is an
infinite sequence of prime numbers: 2, 3, 5, 7, 11, 13 ... etc. Every
number can be factored into the product of prime numbers. Here are a
few examples: 12 = 2 x 2 x 3 = 2^2 3^1 and 1125 = 3 x 3 x 5 x 5 x 5 = 2^0 3^2 5^3.
When writing a prime number factorization it is customary to omit the
multiplication symbols. They are implicit. 2^2 3^1 actually means 2^2 x 3^1.
Look at the exponents. Sometimes you need to multiply the same prime
number multiple times to get the result. 2^2 3^1 actually means
2 x 2 x 3. An exponent of 1 means you multiply one time. An
exponent of 0 means you don't need to multiply the prime number.
2^0 3^2 5^3 actually means 3 x 3 x 5 x 5 x 5.
What
if you assign numbers to an alphabet? For instance suppose you assign
a=1, b=2 until z=26? Then you can use prime numbers to encode text
into numbers. You can turn "Pamela" into something like
2^p 3^a 5^m 7^e 11^l 13^a.
Replace the letters with their corresponding numbers and the text
becomes a number. You can also do the reverse. When you want to read
the text you factor the number into primes and then turn the
exponents back into letters. This works because there is only one way
to factor a number into prime numbers; therefore a number can only
translate into one text. There can be no ambiguity.
Numbers
that are used to encode text in this manner are called Gödel
numbers.
Ordinary
text editing operations people can do with pencil and paper can also
be defined using recursion. For example you can use arithmetic to put
2^g 3^r 5^o 7^k and 2^l 3^a 5^w
together to make 2^g 3^r 5^o 7^k 11^l 13^a 17^w.
This corresponds to the use of pencil and paper to put "grok" and
"law" together to make "groklaw". Arithmetic operations that
manipulate the exponents of prime numbers are the mirror images of the
symbolic manipulations on text.
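Here is a rough Python sketch of this encoding, again only as an illustration of mine. It uses a short, fixed list of primes, so it only handles short lower-case words, and the letter values a=1 through z=26 are the same as above.

    PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]

    def encode(text):
        # Each letter becomes the exponent of the next prime number.
        n = 1
        for prime, letter in zip(PRIMES, text):
            n *= prime ** (ord(letter) - ord('a') + 1)   # a=1, b=2, ... z=26
        return n

    def decode(n):
        # Factor the number back into primes and read the exponents as letters.
        letters = []
        for prime in PRIMES:
            exponent = 0
            while n % prime == 0:
                n //= prime
                exponent += 1
            if exponent == 0:
                break
            letters.append(chr(exponent + ord('a') - 1))
        return "".join(letters)

    print(encode("grok"))              # one very large number standing for "grok"
    print(decode(encode("groklaw")))   # prints groklaw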
It
is time to go back to formal systems. Put this idea together with
what we have just said on prime numbers. We get that formal systems
themselves have a mirror image in arithmetic. This is the case for
Peano arithmetic. There the logic goes:
- All
the symbols can be translated into numbers.
- The
effective methods that test when a mathematical proposition made of
symbols is syntactically correct can be translated into a recursive
function that makes the same test on exponents of prime numbers.
- The
effective method that tests when a proof is made according to the
inference rules can also be translated into a recursive function
that makes the same test on exponents of prime numbers.
- All
the axioms can be translated into exponents on prime numbers by
translating their symbols.
Then
the entire formal system has been translated into arithmetic23.
This is called the "arithmetization of syntax". It is one of the
greatest mathematical discoveries of the twentieth century. It has
many far-reaching consequences24,
some of which are germane to patent law.
Symbols and Reducibility
Gödel
numbers are evidence that the symbols of the mathematical language
need not be visual patterns written on paper. They can also be
something abstract like the exponents of prime numbers. Or if we take
a more modern look, they can be 0s and 1s stored as electromagnetic
signals in electronic devices. In all cases the symbols have the same
meaning, and when we use them to write mathematical statements they
allow us to write the same logical inferences. This observation raises
more issues. What do we mean, "have the same meaning"? What
do we mean, "allow us to write the same logical inferences"? This
calls for a human intelligence to look at the symbols and make such a
determination. This is in apparent conflict with the principle of a
formal system where the validity of mathematical proofs is verified
without using human judgment calls.
Gödel's
work brings a solution to this apparent conflict. Syntactic
translations between the different representations of the formal
system can be defined mathematically. Do you remember when Hilbert
pointed out that formal systems may be subject to mathematical
analysis? Here we are. Syntactic translations are such mathematical
analysis. Once you have shown mathematically that there is a
translation you can conclude that whatever the meaning is in the
original it will be preserved in the translation. This is called
reducibility25.
We say syntax is reduced to arithmetic when a translation from the
syntax to arithmetic is available.
Reducibility
is a concept that should be important to patent law. Abstract ideas
are not patentable subject matter. In order to prevent abstract ideas
from being indirectly patented, there are exceptions in the law that will
not allow patenting an "invention" made of printed text based on
the content of the text. The implication of Gödel's discovery is
that the symbols that represent an abstract idea need not be printed
text. The symbols can be something abstract like exponents of prime
numbers. They can be something physical like electromagnetic signals
or photonic signals. Symbols can take a variety of forms26.
Does the law exempt abstract ideas from patenting only when they are
written in text or does it exempt all physical representations? In
the latter case what is the test that can tell a physical
representation of an abstract idea from a patentable physical
invention? Mathematicians know of such a test and it is reducibility.
Enumerability
Gödel
numbers are also used to define computable enumerability. This is a
term of art for a mathematical concept of computation theory. Imagine
a list of all numbers that have a specific property. For example
these lists are computable enumerable:
- The even numbers: 0, 2, 4, 6 ...
- The odd numbers: 1, 3, 5, 7 ...
- The prime numbers: 2, 3, 5, 7, 11 ...
Computable
enumerability occurs when you can test with a recursive function
whether the number has the property or not. You build your list by
running all the numbers 0, 1, 2 ... in sequence and running them against the
test. The first number that passes the test is the first on the list.
The next number that passes the test is the next one, etc.
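A small Python sketch may make this concrete. The test here is the ordinary test for prime numbers; the enumeration just walks through 0, 1, 2, ... and keeps whatever passes the test. The function names are my own illustrative choices.

    def is_prime(n):
        if n < 2:
            return False
        return all(n % d != 0 for d in range(2, int(n ** 0.5) + 1))

    def enumerate_passing(test, how_many):
        # Run the numbers 0, 1, 2, ... against the test and collect the ones that pass.
        found, n = [], 0
        while len(found) < how_many:
            if test(n):
                found.append(n)
            n += 1
        return found

    print(enumerate_passing(is_prime, 6))   # prints [2, 3, 5, 7, 11, 13]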
Thanks
to Gödel numbers, it is possible to define an arithmetical test for
the syntax of the definition of a recursive function. Then we can
enumerate all the possible definitions. More specifically each
recursive function corresponds to the Gödel number of its
definition. What we enumerate is the Gödel numbers and this is the
same thing as enumerating the definitions themselves because the
Gödel numbers can be translated back into the corresponding text.
Now
consider the Church-Turing thesis that computer programs are the same
thing as recursive functions. This is not obvious now, but by the
time we are done with this article all the mathematical evidence will
have been presented to you. Would this be of consequence to the
patentability of software? Developing a computer program that fulfills
a specific purpose is mathematically the same thing as finding a
number that has some specific properties. Can you patent the
discovery of a number?
Another
way to look at it is that there is a list of all the possible
programs. We can make one by listing all possible programs in
alphabetic order27.
Any program that one may write is sitting in this list waiting to be
written. In such a case, could a program be considered an invention in
the sense of patent law? Or is it a discovery because the program is
a preexisting mathematical abstraction that is merely discovered?
This
idea that programs are enumerable numbers is supported by the
hardware architecture of computers. Programs are bits in memory.
Strings of 0s and 1s are numbers written in binary notation. Every
program reads as a number when the bits are read in this manner.
Programming a computer amounts to discovering a number that suits the
programmer's purpose.
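For illustration, here is a tiny Python sketch of that last point. The file name is only a placeholder of mine; the point is that the bytes of any program file, read in order, are just the digits of one very large number.

    # "sort_names.py" is a hypothetical program file used only as an example.
    with open("sort_names.py", "rb") as program_file:
        program_bytes = program_file.read()

    as_number = int.from_bytes(program_bytes, byteorder="big")
    print(as_number)   # the whole program, read as a single integer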
Alonzo Church's Lambda-Calculus
Lambda-calculus
is the second definition of effective method that uses mathematical
language, recursive functions being the first. I follow the plan of
explaining each of these definitions one by one, and now it is the turn
of lambda-calculus.
It
is the brainchild of Alonzo Church. Like Gödel he was doing research
in the foundations of mathematics but he used a very different angle.
Instead of working on the arithmetic of positive integers he was
trying to use the abstract concept of mathematical function as his
foundation.
Functions
occur in all areas of mathematics: arithmetic, geometry, boolean
algebra, set theory etc. Church was concerned with the properties of
functions that belonged to all types of functions independently of
the branch of mathematics. Where Peano arithmetic uses zero and the
successor function as the most elementary concept, lambda-calculus
gives this role to the operations of abstraction and application28.
What
is abstraction? Do you remember in high school how shocking it was
when the teacher showed you for the first time an algebraic formula
that mixed up letters and numbers? How could you add a letter and a
number like in x+7? For most students this concept is hard to grasp.
People can understand 3+7 or 9+7 but x+7? What kind of number would
be the result of such addition? The point that was hard to learn is
abstraction. The point of writing x+7 is not to calculate an answer.
The point is to define a pattern, the act of adding 7 to something.
The pattern must be applicable to any number whatever it is. This is
the purpose of using a letter. The formula doesn't want to refer to a
specific number. The letter marks the place where a number belongs
but this number is unknown and must not be specified. This finding of
a pattern is abstraction.
Application
is the reverse of abstraction. It is what you do when you know that x
is the number 5 and that at last you can turn x+7 into 5+7.
To
put it simply, abstraction is when you make a function by stating how
it is calculated. Application is the actual use of the rule of the
function.
The
task Church set for himself was to develop a formal system based on
abstraction and application. These two operations require the
exercise of human intelligence. A formal system requires rules that
can be followed without the use of human ingenuity. How do you
reconcile the two? Church found a solution that involved breaking
down abstraction and application into more elementary steps.
He
handled abstraction with a syntactic trick. He required that the
variable being abstracted be explicitly identified with the Greek
letter λ. The expression x+7
would have to be written λx.x+7. Then you no longer need to use
intelligence to know that it is an abstraction and that x is the
marker for what is being abstracted. This information is written in
the formula.
Application
is expressed by juxtaposition. If you decide that x is 5 you would
write (λx.x+7)5. Parentheses are used to group symbols when how they
read may otherwise be ambiguous. This example means the abstraction
x+7 is applied to the value 5. At this point the formal system would
want you to look at the variable after the λ, in this example it is
x, and you would replace it with the expression it is applied to, 5
in this example. Then (λx.x+7)5 is transformed into 5+7. This
operation is called beta-reduction29.
This is a fancy name Church has cooked up to mean what a modern word
processor would call a search and replace operation.
This
technique allows multiple levels of abstraction. For example consider
the concept of addition. It can be expressed as an abstraction of
both arguments λyλx.x+y. An expression like this would capture the
very concept of addition independently of the values being added.
Also each value can be applied separately. For example
(λyλx.x+y)7 becomes after beta reduction λx.x+7. Functions like
λyλx.x+y that produce another abstraction are called higher-order
functions. A complete computation would require applying beta
reduction repeatedly until no abstractions are left. For example
(λyλx.x+y)(7)(5) would turn into 5+7 after two beta reductions.
The
combination of the λ notation and the beta reduction operation
allowed Church to define his lambda-calculus. That is a formal system
that captures the properties of abstraction and application. The
repeated use of beta reduction until no abstraction is left is an
effective method that is used to perform all computations in this
system.
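For readers who program, here is a rough parallel in Python. Python's lambda is not the lambda-calculus itself, but each function call below plays the role of one beta reduction, substituting a value for the abstracted variable.

    add = lambda y: (lambda x: x + y)    # plays the role of λyλx.x+y
    add_seven = add(7)                   # first reduction:  λx.x+7
    print(add_seven(5))                  # second reduction: 5+7, prints 12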
There
is a twist. My examples used arithmetic operations because everyone
understands elementary arithmetic. This helps make the concepts
clear. But the goal of Church was to study functions independently of
any branch of mathematics. He did not include arithmetic operations
in lambda-calculus30.
Surprisingly this doesn't matter. Abstraction and application alone
are sufficient to let you count. And then you can use recursion to
reconstruct arithmetic from this basis.
Here
is how you may represent numbers in lambda-calculus.
- λsλx.x
- λsλx.s(x)
- λsλx.s(s(x))
- λsλx.s(s(s(x)))
Remember
Peano arithmetic when we identified the numbers by counting the
apostrophes? Now we count the occurrences of s. The writing system is
different but the idea is the same.
If
you find yourself scratching your head at stuff like λsλx.s(x)
wondering what it actually means, this is normal. You need to be
skilled at lambda calculus to make sense out of this kind of formula.
I am not giving you a course in lambda calculus. There is no way you
can learn how to read this language from the explanations I am giving
you.
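Still, a rough Python parallel may help readers who program see that these formulas stand for something concrete. In this sketch, which is mine and only an approximation, a number n is the function that applies s to x exactly n times, and we count the applications by letting s be "add one".

    zero = lambda s: lambda x: x               # λsλx.x
    one  = lambda s: lambda x: s(x)            # λsλx.s(x)
    two  = lambda s: lambda x: s(s(x))         # λsλx.s(s(x))

    def to_int(numeral):
        # Count how many times s is applied by letting s be "add one" and x be 0.
        return numeral(lambda n: n + 1)(0)

    print(to_int(zero), to_int(one), to_int(two))   # prints 0 1 2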
The
point I am making is there is a translation from the concepts of 0
and successor to other concepts that belong to
lambda-calculus. The result of the translation is less important than
the fact that there is a translation at all. The reason is that if
there is a translation then mathematicians may invoke the principle
of reducibility. The implication is that any computation31
that can be expressed in the language of Peano arithmetic can also be
expressed with the language of lambda-calculus.
The
existence of a translation is why mathematicians consider that
recursive functions and lambda-calculus are equivalent. All
computations that can be done by means of recursion can be translated
into an equivalent computation made with lambda calculus. And
conversely the language of lambda-calculus can be transformed into
arithmetic by means of Gödel numbers. There is no computation that
one of the languages can do that the other can't do.
There
is a fundamental difference between Gödel numbers and this
translation. Gödel numbers are translating the symbols of the formal
language one by one. Therefore you have a mirror image of the text of
the language hidden in the exponents of prime numbers. The
translation into lambda-calculus is different. It translates not the
symbols but the numbers themselves into functions. You have a mirror
image of the numbers hidden in an enumeration of functions.
This
technique of translation is called modeling. The mirror image in
lambda-calculus of Peano arithmetic is called a model.
Imagine
a geometric pattern like a circle. It remains a circle whether it is
a wheel, a plate or a jar cover. You can see the pattern is the same
by comparing the shapes. But how do you define "the same shape"
mathematically? You don't eyeball the objects and call them the same.
You verify that all of these objects have a boundary where every
point is at the same distance from the center. This is the
mathematical definition of a circle and everything that meets the
definition is a circle.
Now
imagine the whole formal system is an abstract pattern made of
relationships between formulas. The axioms and inference rules of the
formal system are the equivalent of the definition of the circle.
Everything that obeys these same rules matches the definition of
the pattern. The translation is proof that the model obeys the rules
of Peano arithmetic. Therefore the abstract pattern that is
characteristic of the arithmetic of positive integers is found within
lambda-calculus32.
Models
work a bit like a mock up. In the olden times, engineers designing a
car would build a mock up before building a real car. The examination
of the mock up helped them finding faults in their design before
incurring the expense of making the real thing. The model of
arithmetic is a mathematical mock up except that unlike the car mock
up, the model has the full expressive power of the original.
The Church-Turing Thesis
Alonzo
Church was so impressed by the power of Gödel numbers that he argued
every effective method would be reproducible by some recursive
function based on a Gödel number encoding of the effective method.
This idea implies that recursive functions, lambda-calculus and
effective methods are equivalent concepts.
This
is known as the Church Thesis. It didn't convince everyone when it
was first formulated. A more convincing formulation had to wait until
the discovery of Turing machines. Nowadays a revised version of the
Church Thesis that adds Turing machines to this list of equivalent
concepts is accepted by mathematicians under the name of the
Church-Turing thesis.
A Universal Algorithm
In
Peano arithmetic each recursive function makes a different effective
method. The rules to carry out the computation are provided by the
definition. In lambda-calculus the situation is different. There is
only one effective method that is used for all computations. This
effective method is to keep doing beta reduction until there are no
applications left that could be reduced.
Since
lambda-calculus has the capability to perform every possible
computation (in the extensional sense), one algorithm is all one
needs to perform every possible computation33.
The difference between one computation and another is in the data
that is supplied to the algorithm. This is one of the reasons we
argue that software is data.
Lambda-calculus
can be and has been implemented as a programming language. The
language LISP is a prominent
example.
This
raises an interesting possibility. Suppose I am sued for infringement
on a software patent. I may try defending myself arguing I did not
implement the method that is covered by the patent. I implemented the
lambda-calculus algorithm which is not covered by the patent. And
even if it were, lambda-calculus is old. There is prior art. What
happens to software patents then?
Anyone
that argues in favor of method patents on software must be aware that
the current state of technology permits this possibility.
Extensionality
Consider
two of these idealized humans that are required to carry out
effective methods. One of them is carrying out, say, a multiplication
using Peano arithmetic. The other does the same multiplication using
lambda-calculus. If we compared their writings on paper, they
would be different. The formal languages are not the same, the symbols
are not the same and one follows the rules of recursion directly
while the other follows them indirectly through repeated beta
reduction. How can it be said the two computations are the same?
This
is a very important question because it speaks to what constitutes
"is the same" from a mathematician's perspective. For example when
one argues software is mathematics, the argument is that the
computation done by the computer is the same thing as a mathematical
computation, therefore this is mathematics that is being done by the
machine. Such an argument can be understood only when one knows the
mathematical meaning of "being the same" computation.
This
question can be answered two different ways. There is the
"intensional" definition and the "extensional" definition34.
The
intensional definition considers the details of how the computation
is done. If you change the details of the computation, from the
intensional perspective you get a different computation even though the
two computations always produce the same answer. From this perspective
recursive functions and lambda-calculus are different.
The
extensional definition ignores the details of the calculation and
considers only the equivalence by means of reducibility. If the two
calculations can be matched with a translation from one language to
another, then they are the same from the extensional perspective. From the
extensional perspective recursive functions and lambda-calculus are
equivalent because every calculation made in one system is matched by
a calculation made in the other system.
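A small Python sketch, mine rather than anything from the mathematical literature, may help. The two procedures below take different steps (they are intensionally different) yet always return the same answer for the same input (they are extensionally the same function).

    def sum_by_loop(n):
        # Add the numbers 1 through n one at a time.
        total = 0
        for k in range(1, n + 1):
            total += k
        return total

    def sum_by_formula(n):
        # Use the closed formula n(n+1)/2 in a single step.
        return n * (n + 1) // 2

    print(sum_by_loop(100), sum_by_formula(100))   # prints 5050 5050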
The
law has a similar35
concept when it makes a difference between ideas and expressions of
ideas. For example, suppose I write a program. Some outside party
has written a similar program. This party sees my program and thinks
I infringe on his proprietary rights. There are two scenarios
depending on whether he makes his claim according to copyright law or
patent law.
For
claims of copyright infringement the text of the source code matters.
If I wrote my program independently and I can prove the texts are
different, I don't infringe on his rights. This is an intensional
point of view.
For
claims of patent infringement, differences in the text of the
source code won't matter. It won't even matter if the code is written
in a different programming language. If my program uses the same
method that is covered by the patent according to whatever legal test
of "same method" is applicable, I will infringe. This is an
extensional perspective.
Which
of the two perspectives is correct? It depends on the degree of
abstraction that is wanted. The intensional perspective is the less
abstract. It is the perspective you need when you study the
properties of the written expressions of the symbols. It may be used
to answer questions like how many steps are required to produce an
answer or how much space in memory is required to store the symbols.
The
extensional perspective is more abstract. It is the one you need when
you are concerned with the expressive power of the language. It
answers questions like: whether you can say the same things in both
languages or how to tell when two expressions in different
languages have the same meaning.
The
intensional vs extensional dichotomy explains the situation of the C
sorting program that is the same when run on different hardware
architectures. When you compile the code, you produce different
instructions depending on which hardware architecture is targeted.
These differences are significant only when you look at the code from
the intensional perspective. When you take the extensional
perspective, all executables produce the same output. Therefore they
are the same. The consequence is that software is abstract. The
details of the instructions don't matter36.
Denotational Semantics
C
is a programming language, not a mathematical language. How do we
define its meaning mathematically? This is done using a technique
called denotational semantics37.
For
example consider the statement i=i+1 that could be written in many
programming languages such as C. It looks like an equation but it is
not an equation. Here the letter i is called a variable. Imagine a
box with a label. The box contains information. The variable is the
label for such a box. The statement i=i+1 means open the box labeled
i, get the number that is in the box, add 1 to it and put it back in
the same box. If there is no number in the box your program has a bug
and it will fail. The question is what is the mathematical meaning of
this operation.
Suppose
you run your program in your favorite debugger38.
You can stop the execution of the program just before the i=i+1
statement. Then you can use the debugger to inspect the memory of the
computer. What do you see? There are many variables in use. Each of
them is a labeled box containing some information. You may have a
variable called "FirstName", another called "LastName",
another one called "Salary" and one named "HiringDate". The
memory assigns each of these names a value. This is a
mathematical function. It doesn't happen to be one you calculate
because the values are explicitly stored and not computed with a
formula or anything. It is a mathematical function nonetheless.
Now
use the debugger to execute the i=i+1 statement and stop the
execution again. What do you see in the memory? None of the variables
has changed its value except i, which has been incremented by
one. The memory is a new mathematical function that is identical to
the previous one except for the change in the value of i. This is the
meaning of the statement. When you consider the content of the memory
as a function, the statement is a higher-order function that changes
this function into another function.
This
is how denotational semantics works. Lambda-calculus is good at
handling higher-order functions. You build a mathematical model of the
memory content of the computer using lambda-calculus. Then each
statement in the language can be translated into a higher-order
function that manipulates this model. The result is a definition of
the mathematical meaning of the program.
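Here is a rough Python sketch of that idea, with the memory modeled as a Python dictionary instead of a lambda-calculus term. The variable names are made up for the example; the point is that the statement i=i+1 is itself a function that turns one memory state into the next.

    def increment_i(memory):
        # The meaning of "i = i + 1": a function from one memory state to the next.
        new_memory = dict(memory)            # every other variable keeps its value
        new_memory["i"] = memory["i"] + 1    # the one box that changes
        return new_memory

    before = {"FirstName": "Ada", "Salary": 50000, "i": 7}
    after = increment_i(before)
    print(before["i"], after["i"])           # prints 7 8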
This programming
statement is discussed in a brief to the Supreme Court of the United
States by Professor Hollaar and IEEE-USA. When arguing that software
is not mathematics, the brief states39:
But in most instances, the correspondence between computer
programs and mathematics is merely cosmetic. For example, the
equation E = MC² expresses a relationship between energy and matter
first noted by Einstein, while the computer program statement E = M *
C ** 2 represents the calculation of M times C raised to the second
power and then assigning the result to a storage location named E. It
is unfortunate for purposes here that the early developers of
programming language made their calculation-and-assignment statements
look like mathematical equations so that they would seem familiar to
scientists and engineers. But the common programming statement I = I
+ 1, which increments the value stored in location I, is essentially
nonsense as a mathematical equation. Similarly, a computer program is
a series of calculation-and-assignment statements that are processed
sequentially, not a set of simultaneous mathematical equations that
are solved for their variables.
This statement from the
brief is incorrect. The correspondence between a statement in a high
level language such as C and mathematics is not defined by syntactic
similarity. It is defined by denotational semantics. There is another
correspondence that involves machine instructions. This
correspondence will be discussed when we reach the topic of Turing
Machines.
Programming
Directly from Mathematical Languages
If
programming languages have a mathematical meaning what stops
programmers from writing their programs using a mathematical language
in the first place? The answer is that it can be done. Languages such as
LISP in the functional family of
languages implement lambda-calculus and can be used to write programs
using a mathematically defined notation. Programmers prefer languages
in the imperative family40
of languages because the resulting programs are closer to the hardware
architecture of the CPU and give them more control over how the
algorithm will execute. This situation may change. The trend is to
increase the number of CPUs that are put on a chip. In the presence of
multiple CPUs, imperative languages become more difficult to use. The
more CPUs, the harder it gets to write a program that runs well on the
computer. Functional languages are better suited to this
kind of environment.
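To illustrate the difference in style (a sketch in Python; a functional language such as LISP would be the natural home for the second version), here is the same computation written imperatively, as a sequence of assignments that update memory, and functionally, as a definition of the value with no assignment at all:

    # Imperative style: a sequence of assignments updating a memory location.
    def sum_imperative(numbers):
        total = 0
        for n in numbers:
            total = total + n        # the same kind of statement as i = i + 1
        return total

    # Functional style: no assignment, just a definition of the result.
    def sum_functional(numbers):
        if not numbers:
            return 0
        return numbers[0] + sum_functional(numbers[1:])

    print(sum_imperative([1, 2, 3]))   # 6
    print(sum_functional([1, 2, 3]))   # 6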
There
is another approach. One may take advantage of the relationship
between algorithms and logical proofs. Consider this parable.
This
is 1963. It is the heart of the Cold War. The head of CIA rushes into
the office of his main scientific advisor.
CIA Head: We have
received a report from James Bond. The Soviets have developed a new
formula that makes our strategic defense obsolete. It will cost us
billions to upgrade. But the report says it could be disinformation,
that the new theory may be something phony to make us spend billions
uselessly. We cannot afford to guess. We need to know. Can you use a
computer to find out if the formula is true or false?
Chief Scientist: Sure
I can. I can write a program that prints "True" whatever the
input is and another one that prints "False" whatever the
input is. One of the two programs is bound to provide the correct
answer. Therefore I can use a computer to print the solution you
seek.
There
are two major schools of mathematicians. The classical school would
say the answer of the chief scientist is logically correct although
inappropriate. One of the two programs does indeed print the correct
answer and the chief scientist can indeed write it. Therefore the
chief scientist has literally answered the question that has been
asked.
The
intuitionistic school would protest that the chief scientist's answer is
illogical. You can't argue you can write the requested program until
you can tell which program is the one you seek.
The
parable has been designed to make it sound like the intuitionistic
point of view is the correct one. In this context the whole point of
the question is to know which of the two programs provides the answer. I
wanted to show that intuitionistic logic has merits. Such logic
requires building a tie between a proof that a problem is solvable
and the construction of an effective method that actually solves the
problem. Their point is that if you can't actually solve the problem, how
can you argue a solution exists?41
If
you work out this tie using the language of lambda-calculus you get
something called the Curry-Howard correspondence. It is a
mathematical translation that takes a proof and turns it into an
algorithm in the language of lambda-calculus, or vice-versa. The
implication is you can write a mathematical proof and use an
automatic process to extract the built-in algorithm and compile it
into an executable program. Or conversely you write a program and the
compiler verifies that it is bug-free by extracting a proof of its
correctness from the code or reporting an error if the proof doesn't
work out. This is a topic for research. The mathematical principles
are known but their application is not ready for use outside of the
research laboratory yet.42
Despite this limitation some specific programs that have been
verified in this manner have been developed.43
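To give a taste of the correspondence, here is a minimal sketch in the Lean proof assistant (assuming Lean 4 syntax; the names are mine). The very same kind of term can be read as a proof of the proposition "A implies A" or as a program, the identity function:

    -- Read as a proof: for every proposition A, A implies A.
    theorem a_implies_a (A : Prop) : A → A :=
      fun a => a

    -- Read as a program: the identity function on any type.
    def identity (α : Type) : α → α :=
      fun a => a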
A
question for patent law is what do you do when this research comes to
fruition? At this point there will be no actual difference between
the work of a computer programmer and the work of a mathematician. How
would you patent software without directly patenting mathematics?
The
merging of programming and mathematical proofs has industrial
applications. [See Campbell
for an example.] If you can prove a program is mathematically correct
you verify that its logic will perform according to the intent of the
programmer. This eliminates a large quantity of bugs.44
Several
formal verification methods exist outside of the Curry-Howard
correspondence. One of the most popular is called Hoare logic45.
It is a formal system where every statement in a programming language
corresponds to a rule of inference. Programmers annotate their code
with mathematical propositions and verify that the rules of
inference are obeyed. If they are, the program is proven
mathematically correct.
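The flavor of such annotations can be sketched in Python with assertions standing in for the pre- and postconditions (a rough analogy only: Hoare logic proves these claims deductively from the program text, without ever running the program; the example and its conditions are invented):

    def absolute_value(x):
        # Precondition: x is an integer.
        assert isinstance(x, int)
        if x < 0:
            y = -x
        else:
            y = x
        # Postcondition: y is non-negative and equals either x or -x.
        assert y >= 0 and (y == x or y == -x)
        return y

    print(absolute_value(-5))   # 5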
Alan
Turing's Turing Machines
Turing
machines are the third of the major mathematical definitions of
computable functions. They were proposed specifically to resolve an
open question with the first two definitions. Alonzo Church put forth
the thesis that Gödel numbers were powerful enough to reproduce
every effective method there could be. But at the time the evidence
was only intuition. It looked like Gödel numbers had that power, but how
do you prove it? Alan Turing's work brought convincing evidence.
The
definition of effective method requires a human being doing the work
with pencil and paper. Turing proceeded from an analysis of what such a
person is able to do. He eliminated all redundant possibilities until
he was left with what is essential.46
Let's look at this elimination process.
Do
we need to write things in columns or otherwise arrange symbols in
two dimensions? This is not necessary. One can do the same work by
writing everything on the same line and visualizing how it would fit in
two dimensions. It is inconvenient, but since the task is to find out
the essential minimum, writing things in two dimensions can be done
away with. This suggests that regular sheets of paper are not necessary. The
paper may be in the form of a long tape where symbols are written
sequentially. The human winds the tape back and forth as he works
with the information.
How
long must the tape be? Our idealized human who carries out the
effective method never runs out of time and paper. The tape has to be
infinitely long to reflect this capacity.
A
human often needs to refer to information already written. How fast
does he need to shuffle paper to find out what he wants? Moving the
tape one symbol at a time is sufficient. If you need to go farther
away you can repeat moving the tape one symbol until you get where
you want to be.
When
reading the information, how many symbols do you need to look at at a
time? The answer is one. If you need to look at more symbols you can
examine them one by one.
When
faced with making a choice, on what basis would our person
make his decision? Choices are made on the basis of the symbols
being read and on his state of mind. He must track where he is
in the execution of the effective method. This tracking is done by
changing his state of mind. Then, when faced with a choice, the human
will consider his state of mind and the symbol being read and make
the next move as is required by the method. This implies we don't
need to delve into human psychology or the biology of the brain. All
we need to know is (1) that there are states of mind and (2) that the
human changes his states as he tracks where he is in the execution of
the effective method.
How
many states of mind are required? Only a finite number is required.
They can be inventoried by analyzing the rules of the method and
finding the junctures where choices are made. This means that if we
write down the inventory of states of mind we may represent them with
a finite number of symbols.
These
considerations are summarized in the list of capabilities below. They
are the minimal requirements to perform any effective method.
- There
is an infinitely long tape where symbols can be recorded
sequentially.
- The
tape has a "current" position where a symbol can be read. This
is the current symbol.
- A
symbol may be written at the current position overwriting any symbol
already there.
- The
tape may be moved one symbol position either to the left or to the
right. It may also stand still.
- The
human carrying out the method has a number of distinct possible states of
mind.
- The
states of mind are finite in number and may be represented with
a finite number of symbols.
- The
human changes his state of mind based on the previous state and the
current symbol on the tape.
The
implication is that it is possible to describe every effective method
in these terms. The rules for carrying out the method are written as
a list of quintuples. This is a list of line items. Each of them is
made of five elements of information.
- The
current state of mind that is required for the rule to apply.
- The
current symbol that is required for the rule to apply.
- The
symbol that is written on the tape when the rule applies. If this is
the same symbol as the one mentioned in item 2, then this is a do-nothing operation.
- Whether
the tape moves right, left or stands still after writing the symbol
if the rule applies.
- The
state of mind one must adopt after the application of the rule.
This
process transforms the broad scope of effective methods into
something that has a standardized form. You analyze the original
method rules, translate them into a suitable list of quintuples, and you
are all set.
Now
imagine that our idealized human carrying out the method chooses to
use an idealized typewriter instead of pencil and paper. The
typewriter writes on an infinite tape that can be moved left or right
one symbol at a time at the press of a key. The user may see the
current symbol or overwrite it with a new symbol by the press of a
key. There is a blackboard on the wall where the list of quintuples
is written. The user uses his current state of mind and the current
symbol to select the applicable quintuple, and then he performs the
actions dictated by the quintuple. He repeats this process until it is
over. This human is doing the same calculation as if he were carrying
out the original effective method.
Now
imagine you could endow the typewriter with the ability to read the
symbols and track its internal state by itself. The quintuples could
be mechanized, allowing the typewriter to carry out the calculation
all by itself, without the help of an actual human. What you get when
you do this is a Turing machine.
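Here is a minimal sketch of such a mechanized machine in Python (the machine, its states and its tape alphabet are invented for illustration). The dictionary of quintuples plays the role of the blackboard, and the loop plays the role of the human who reads the current symbol, consults his state of mind, and follows the applicable rule:

    # Each quintuple is keyed by (state, current symbol) and gives
    # (symbol to write, tape move, next state).
    quintuples = {
        ("flip", "0"): ("1", "R", "flip"),   # overwrite 0 with 1, move right
        ("flip", "1"): ("1", "R", "flip"),   # leave 1 alone, move right
        ("flip", " "): (" ", "S", "halt"),   # on a blank: stand still and halt
    }

    def run(quintuples, tape, state):
        tape = list(tape) + [" "]            # a finite stand-in for the infinite tape
        position = 0
        while state != "halt":
            write, move, state = quintuples[(state, tape[position])]
            tape[position] = write
            position += {"R": 1, "L": -1, "S": 0}[move]
        return "".join(tape).rstrip()

    print(run(quintuples, "0100", "flip"))   # prints 1111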
The
Evidence for the Church-Turing Thesis
Now
we have closed the loop. Effective methods are reduced to Turing
machines. But by definition a Turing machine is an effective method
because the human typing on the keyboard of the idealized typewriter
might just as well use pencil and paper. Therefore the equivalence between
effective methods and Turing machines is a two-way street.
Turing
machines can be translated into recursive arithmetic with Gödel
numbers. Recursive arithmetic can be translated into lambda-calculus
with the method we already saw. Turing found a set of rules for a
Turing machine that can do the beta reduction algorithm of
lambda-calculus. The cycle is complete. All three definitions of
computable functions may be reduced to each other. The argument in
favor of the Church-Turing thesis is now complete.
Quite
a lot of evidence for the Church-Turing Thesis has been found beyond
what has been presented here. An article on the Church-Turing
Thesis on the Turing Archive summarizes it as follows:
Much
evidence has been amassed for the 'working hypothesis' proposed by
Church and Turing in 1936. Perhaps the fullest survey is to be found
in chapters 12 and 13 of Kleene (1952). In summary: (1) Every
effectively calculable function that has been investigated in this
respect has turned out to be computable by Turing machine. (2) All
known methods or operations for obtaining new effectively calculable
functions from given effectively calculable functions are paralleled
by methods for constructing new Turing machines from given Turing
machines. (3) All attempts to give an exact analysis of the intuitive
notion of an effectively calculable function have turned out to be
equivalent in the sense that each analysis offered has been proved to
pick out the same class of functions, namely those that are
computable by Turing machine. Because of the diversity of the various
analyses, (3) is generally considered to be particularly strong
evidence. Apart from the analyses already mentioned in terms of
lambda-definability and recursiveness, there are analyses in terms of
register machines (Shepherdson and Sturgis 1963), Post's canonical
and normal systems (Post 1943, 1946), combinatory definability
(Schönfinkel 1924, Curry 1929, 1930, 1932), Markov algorithms (Markov
1960), and Gödel's notion of reckonability (Gödel 1936, Kleene 1952).
This
same article also makes a point of what the
Church-Turing thesis does not say. The thesis says Turing
machines are equivalent to computations made by humans using effective
methods. It doesn't say they are equivalent to any possible
computation made by a machine. The reason is that the idealized
typewriter argument doesn't discuss equivalence with other machines,
and therefore we don't have a basis to reach such a conclusion.
The
article provides a list of published articles where mathematicians
have constructed abstract computing machines that are not equivalent
to Turing machines. These calculations cannot be done by a human and
are not effective methods. Nobody has constructed a physical device
that implements any of these machines. It is unknown whether you can
build a physical computing device that can perform a computation a
Turing machine can't do.
The
Working Principle of the Modern Computer
What
if we design an effective method that directs the idealized human to
read the quintuples of a Turing machine and do as they tell him to
do? What if we make a Turing machine out of this method? You get a
universal Turing machine. Here "universal" means that it can do the work
of all other Turing machines.
There
are two lists of quintuples here. The universal Turing machine uses a
list of quintuples to define its activity like any other Turing
machine. If you supply this machine with a tape where the quintuples
of another machine are written, the universal machine will emulate
the behavior of the other machine.
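The point can be sketched in Python as well (a toy illustration, with machines invented for the purpose): the interpreting loop below never changes, and the quintuples of the machine to be emulated are simply data handed to it, which is exactly the relationship between the universal machine and the machine it emulates.

    # The fixed interpreter: it emulates whichever machine it is handed.
    def universal(quintuples, tape, state):
        tape, position = list(tape) + [" "], 0
        while state != "halt":
            write, move, state = quintuples[(state, tape[position])]
            tape[position] = write
            position += {"R": 1, "L": -1, "S": 0}[move]
        return "".join(tape).rstrip()

    # Machine A, supplied as data: replace every 0 with 1.
    machine_a = {("s", "0"): ("1", "R", "s"),
                 ("s", "1"): ("1", "R", "s"),
                 ("s", " "): (" ", "S", "halt")}

    # Machine B, supplied as data: erase the tape.
    machine_b = {("s", "0"): (" ", "R", "s"),
                 ("s", "1"): (" ", "R", "s"),
                 ("s", " "): (" ", "S", "halt")}

    print(universal(machine_a, "0101", "s"))   # 1111
    print(universal(machine_b, "0101", "s"))   # prints an empty line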
This
is a major discovery in the history of computing, as the Stanford
Encyclopedia of Philosophy points out:
Turing's
construction of a universal machine gives the most fundamental
insight into computation: one machine can run any program whatsoever.
No matter what computational tasks we may need to perform in the
future, a single machine can perform them all. This is the insight
that makes it feasible to build and sell computers. One computer can
run any program. We don't need to buy a new computer every time we
have a new problem to solve. Of course, in the age of personal
computers, this fact is such a basic assumption that it may be
difficult to step back and appreciate it.
This
is the point where computation theory stopped being the exclusive
province of mathematicians and became relevant to engineers. The
question was how do you build an actual universal Turing machine?47
Today we know the answer. We use electronics. The tape corresponds
to the computer memory. The table of quintuples corresponds to the
CPU instruction set. The CPU itself has the ability to change
states. The Turing machine you want to emulate is the program: data
that you load in memory for execution. When we assemble the pieces, we
have a modern digital computer.
When
making an actual computer, engineers had to cross the line that
separates the abstract world of the mathematician from reality. The Turing
machine is an abstraction suitable to provide a foundation. Do you
remember what we said about foundations when discussing Peano
arithmetic? They are not meant to be used directly. You need to build
an edifice on top of them to obtain a usable result. The engineers
had to build the edifice.
Moving
a tape one symbol at a time is inefficient. You need to be
able to get the information directly. You need to access memory
randomly.
There
are many types of storage devices with different characteristics:
RAM, ROM, Flash, magnetic disks, optical disks, etc. You need to mix
and match them as required. Where the Turing machine needed one
storage device, the tape, a real-life computer needs many.
The
Turing machine can do only one operation: it writes one symbol on the
tape based on the appropriate quintuple. This is too rudimentary for
real-life computing. An actual computer needs an elaborate
instruction set.
You
get the idea. The Turing machine provided the starting point, but
engineers had to elaborate on the concept. The result is what is now
known as the von Neumann architecture.48
Why
do we still think computers are Turing machines after these
engineering changes? The answer depends on whether you look at an
intensional definition or an extensional definition. Mathematicians
didn't stop working and reducibility didn't go away. They studied the
mathematical effect of changes in the architecture of the Turing
machine.49
What if the machine has more than one tape? What if the tapes are
replaced with random access memory? The answer is these machines can
be reduced to Turing machines. You don't bring into existence new
computations by doing these changes. From an extensional perspective
the modified machines perform the same computations as a plain Turing
machine.
There
is one difference that has an effect. The idealized human never runs
out of time and paper. This is why the Turing machine has an infinite
tape. But in the real world, there are physical limits. Memory is not
infinite, and time is limited. This restricts the computations
that are possible. A real-life computer is an approximation of a
Turing machine. Whether or not a calculation it carries out
corresponds to one performed by a Turing machine depends on whether
or not this calculation will fit within the physical constraints. If
it doesn't fit, the calculation will not be carried out to completion
even though the Turing machine has the theoretical ability to do it.
But when a computation fits within the constraints, it is always the
same as one that is done by a Turing machine.
The
Nature of Software
Let's
recapitulate. A universal Turing machine is one that, when given the
quintuples of another Turing machine, will do as instructed. This
principle is implemented in an actual computer as follows:
- The
tape of a Turing machine corresponds to the memory of a computer.
- The
read/write capability of a Turing machine corresponds to the memory
bus of the computer.
- The
changing states of the Turing machine correspond to the changing
states in the CPU electronics.
- The
quintuples of a universal Turing machine correspond to the CPU
electronics.
- The
quintuples of the other Turing machine correspond to the computer
program.
The
last bullet is an important one for patent law. It says software is
the information you need to give to the universal Turing machine to make
it perform the desired computation. This simple fact has
consequences.50
Software
is data.
This
is because information is synonymous with data. When you look at a
physical computer, software is stored in a data file like any other
form of data. It is loaded in memory and stored in exactly the same
chips of RAM as data. The CPU manipulates the software with the same
instructions as it uses to manipulate other forms of data. The fact
that software is data is the mathematical principle that makes it
possible to build and sell modern digital computers.
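A very small Python sketch of the same point (the program text is invented): the same string of symbols can be measured and stored like any other data, or handed to the interpreter and executed.

    # The very same text is data...
    program_text = "print(6 * 7)"
    print(len(program_text))     # 12: measured and stored like any other data

    # ...and it is also a program: hand it to the interpreter and it runs.
    exec(program_text)           # prints 42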
Software
is abstract.
This
is a consequence of how Turing machines are constructed. A computer
program is equivalent to the work of an idealized human being
carrying out an effective method. By definition this work is an
abstraction written down on paper. Another way to look at it is that a
computer program is equivalent to some Turing machine. If we apply
the discoveries of Alan Turing, we never need to physically build
this Turing machine. It is sufficient to find a description of it and
pass it to a universal Turing machine. The only device we need to
actually build is the universal Turing machine. The computer is the
universal Turing machine. The software is a description of an
abstract machine that is never physically implemented.
Software
is mathematics.
This
is because all the instructions the CPU is able to execute are
mathematical operations. Physical computers are implementations of
universal Turing machines. Universal Turing machines are part of
mathematics. Therefore, whatever they do is mathematics. Another way
to look at it is that software is the description of some abstract Turing
machine. Since all Turing machines are part of mathematics, software
is mathematics.
Software
is discovered as opposed to invented.
This
is a consequence of the fact that software is abstract and software
is mathematics. You can't invent abstract ideas and mathematics. You
can only discover them.
What
is Not Computable?
Computable
functions are defined through an exercise in reducibility. Whatever
is reducible to a Turing machine is computable. The flip side of the
question is, what cannot be reduced to a Turing machine?
Alan
Turing discussed this question in his doctoral thesis.51
He studied what he called an oracle machine, which is a Turing
machine augmented with the capability to ask a question to an oracle
when the computation reaches a certain point. Then the oracle answers
the question and the computation resumes, taking this information into
consideration.
The
oracle could be anything.52
For example, imagine our idealized human typing on his idealized
typewriter doing calculations about sunspots. Every now and then, he
picks up the phone and asks an astronomer the percentage of the
surface of the sun that is currently covered with sunspots. The
astronomer looks into the telescope and provides the answer. Then our
idealized human resumes typing on the idealized typewriter
carrying out the rest of the computation. The astronomer is the
oracle. Observing the sun is not a computation even if the result of
the measurement is a number. Therefore the whole process including
the astronomical observation is not something that could be done by a
Turing machine alone.
A Turing
oracle machine introduces the concept of computation relative to the
oracle. This means part of the process is a computation and part is
not. It is possible to draw a line between the two by sorting out
what is the oracle and what is the Turing machine.
Consider
that instead of a human being typing at the typewriter, we have a
computer that interfaces directly to the telescope. It takes
photographs of the sun and computes the surface of the sunspots from
the photos. This computer is doing mathematical calculations relative
to the photographs, but taking the picture is not a mathematical
operation. This kind of integration with non-computational devices is
where the boundaries of computation theory lie.
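A sketch in Python of where that line falls (the function names and the numbers are invented; a random number stands in for the telescope). Everything in the second function is ordinary computation relative to the oracle's answer; the call to the oracle itself is not a computation:

    import random

    def observe_sunspot_coverage():
        """The oracle: a measurement of the outside world, not a computation.
        Here it is faked with a random number for the sake of the sketch."""
        return random.uniform(0.0, 5.0)      # percent of the visible disk covered

    def sunspot_report():
        coverage = observe_sunspot_coverage()     # the oracle is consulted here
        # Everything below is computation relative to that answer.
        disk_area = 3.14159 * (696_000 ** 2)      # visible disk of the sun, in km^2
        covered = disk_area * coverage / 100
        return "About {:,.0f} km^2 of the visible disk is covered.".format(covered)

    print(sunspot_report())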
This
very issue has been contemplated by the Supreme Court of the United
States in two celebrated cases, Parker
v. Flook and Diamond
v. Diehr.
In
Parker v. Flook the court ruled on how mathematical algorithms should
be treated by patent law:
Respondent's
process is unpatentable under §
101 not because it contains a mathematical algorithm as one
component, but because once that algorithm is assumed to be within
the prior art, the application, considered as a whole, contains no
patentable invention. Even though a phenomenon of nature or
mathematical formula may be well known, an inventive application of
the principle may be patented. Conversely, the discovery of such a
phenomenon cannot support a patent unless there is some other
inventive concept in its application.
In
Diamond v. Diehr the court looked at the flip side of the situation:
In
contrast, the respondents here do not seek to patent a mathematical
formula. Instead, they seek patent protection for a process of curing
synthetic rubber. Their process admittedly employs a well-known
mathematical equation, but they do not seek to preempt the use of
that equation. Rather, they seek only to foreclose from others the
use of that equation in conjunction with all of the other steps in
their claimed process. These include installing rubber in a press,
closing the mold, constantly determining the temperature of the mold,
constantly recalculating the appropriate cure time through the use of
the formula and a digital computer, and automatically opening the
press at the proper time. Obviously, one does not need a "computer"
to cure natural or synthetic rubber, but if the computer use
incorporated in the process patent significantly lessens the
possibility of "overcuring" or "undercuring," the
process as a whole does not thereby become unpatentable subject
matter.
These
two cases address this question: when is an invention an
application of a mathematical algorithm as opposed to being the
algorithm itself? With knowledge of computation theory, we may say the
answer must have to do with the interactions with the real world.
Everything that is the symbolic manipulations done by a CPU or
equivalent devices is a mathematical computation. But when the device
interacts with the outside world, whether or not you have a
patentable invention, according to those cases, depends on the nature of this interaction. This
is what we learn from Turing's oracle machine.
Why
Lawyers Need to Know Computation Theory
What
is a mathematical formula when implemented in digital electronics?
What is an application of the formula as opposed to the formula
itself? These questions have confounded lawyers for decades for good
reason. These are hard questions. But they are the questions they
have to answer when software patents are discussed.
For
this type of question, guidance from mathematicians is appropriate.
This is why patent lawyers should learn computation theory. This is
especially important in software because the relevant mathematical
concepts are hard. It took decades for the best mathematicians of the
first half of the twentieth century to figure them out. Their
findings are among the greatest intellectual achievements of their
time. It is appropriate to stand on the shoulders of giants because
this is not the sort of thing anyone can figure out by himself merely by
pondering dictionary definitions and thinking things through. But
many courts have been trying to do just that in software patent
cases. For example, consider this sentence from In
re Alappat:
We
have held that such programming creates a new machine, because a
general purpose computer in effect becomes a special purpose computer
once it is programmed to perform particular functions pursuant to
instructions from program software.
In
a single sentence the court tosses out the fundamental principle that
makes it possible to build and sell digital computers. You don't need
to create a new machine every time you perform a different
computation; a single machine has the capability to perform all
computations. This is what universal Turing machines are doing.
In
the 1940s, the ENIAC was programmed by physically
reconnecting circuits. This design was awkward. Engineers took pains
to eliminate this need and designed the next generations of computers
by using the ideas of universal Turing machines as guidance. The
explicit goal was to make sure you don't need to make new circuitry,
much less make a new machine, when you program the computer. This
history
is documented in the Stanford Encyclopedia of Philosophy.53
See also this
article on Groklaw.
How
could the court make such a holding? It is because it didn't know
about computation theory. And I suspect it didn't know because no one
ever put an explanation of computation theory into the court record.
Let's
look at another example. This one is from Application
of Walter D. BERNHART and William A. Fetter. It discusses a
patent on a software algorithm to plot lines based on Cartesian
coordinates. In this case, several issues where computation
theory is relevant show up very clearly. The first one is about
determining what counts as a method made of mental steps:
Looking
first at the apparatus claims, we see no recitation therein of mental
steps, nor of any element requiring or even permitting the
incorporation of human faculties in the apparatus. These claims
recite, and can be infringed only by, a digital computer in a certain
physical condition, i. e., electro-mechanically set or programmed to
carry out the recited routine. The claims also define the invention
as having plotting means for drawing lines or for illustrating an
object. When such functional language is used in a claim, 35 U.S.C. §
112 states that "such claim shall be construed to cover the
corresponding structure, material, or acts described in the
specification and equivalents thereof." The specification here
mentions only mechanical drafting machines. The claims therefore
cover, under section 112, only such mechanical drafting machines and
their equivalents. We know of no authority for holding that a human
being, such as a draftsman, could ever be the equivalent of a machine
disclosed in a patent application, and we are not prepared to so hold
in this case. Accordingly, we think it clear that applicants have not
defined as their invention anything in which the human mind could be
used as a component.
...
In
the case now before us, the disclosure shows only machinery for
carrying out the portrayal process. In fact it is the chief object of
the invention to eliminate the drudgery involved in a draftsman's
making the desired portrayals. Accordingly, a statutory process is
here disclosed. Looking then to method claim 13, we find that it in
no way covers any mental steps but requires both a "digital
computer" and a "planar plotting apparatus" to carry
it out. To find that the claimed process could be done mentally would
require us to hold that a human mind is a digital computer or its
equivalent, and that a draftsman is a planar plotting apparatus or
its equivalent. On the facts of this case we are unwilling so to
hold. We conclude that the method defined by claim 13 is statutory,
and its patentability must be judged in light of the prior art.
Imagine
how different this passage would have been if the court knew about
Turing machines and their equivalence with effective methods. Would
the idealized human being typing on his idealized typewriter count as
proof that the claims are made of mental steps? Or would a court
hold that the very same steps are mental steps only when they
are carried out by an actual human? In this case, the court clearly
looked at the problem from an intensional perspective and took note of
the difference between actual humans and electronic devices. But what
if the extensional perspective is the correct one? This is the way
mathematicians would look at it. Without knowledge of computation
theory, the court has no way even to take the mathematical view into
consideration.
This
Bernhart case is really fascinating. It concentrates some of
the hardest issues where computation theory is relevant. Here is
another extract that speaks to what constitutes a symbol:
Nor
are the "printed matter" cases, cited by the board, supra,
controlling as to these apparatus claims either on the facts or in
principle. On their facts, those cases dealt with claims defining as
the invention certain novel arrangements of printed lines or
characters, useful and intelligible only to the human mind. Here the
invention as defined by the claims requires that the
information be processed not by the mind but by a machine, the
computer, and that the drawing be done not by a draftsman but by a
plotting machine. Those "printed matter" cases therefore
have no factual relevance here.
What
if the judge had factored in the equivalence of Turing
machines and effective methods? He would have understood that the
information stored in memory is symbolic in nature. He would have
understood that there is no difference between a mental step of a
human being and the computations of the computer. He would have
understood that the symbols written in computer memory have the same
meaning as those written on paper. Besides, some humans can read
information meant to be processed by machines. There are tools like
debuggers that have been made expressly for this purpose.
Let's
take this from another angle. What kind of files contain
instructions that cause the computer to perform calculations? Here are
a few examples. This list is by no means complete:
- Executable
binaries: these files contain instructions that are executed
directly by the CPU. These files are the result of the compilation
of source code. When legal cases discuss the programming of a
computer, they think almost exclusively of these files. I never see
an analysis of how the case reasoning would extend to other kinds of
instructions.
- Interpretable
code: these files contain programs in human-readable form that may
also be executed directly by the computer without an intermediate
compilation step. This requires a special program called an
interpreter that reads the code and executes the instructions. Several
programming languages are interpreted. This court didn't consider
the possibility that the patented algorithm could be implemented in
such a language. Would such code infringe? In that case, however, the
invention would be made of a particular arrangement of characters that is
intelligible to the human mind. Would this change the assessment of
the court regarding the factual relevance of the "printed matter"
cases?
- PDF
files: these files contain instructions to be executed by a virtual
machine specialized in the visual rendering of written
documents. How do the printed matter court cases apply to patents
written to algorithms made of the execution of such instructions?
The
distinction between symbols readable by humans and machine-readable
information is artificial. Computation theory teaches us that symbols
remain symbols whether they are ink on paper or recorded in
electronic form. In both cases they can be used to implement the same
abstract processes.
Let's
take this same idea from still another angle. Suppose a court
determines that software in its executable binary form is patentable
because it is turned into something physical. This is an argument we
see frequently. For the purposes of this discussion it doesn't matter if
the court thinks the physical something is a machine, a process or
whatnot. The question is what happens to the patentability of code in
forms other than executable binaries?
- The
court doesn't want to patent data like a novel or music. Therefore
the physical something that makes binaries patentable will not be
sufficient to make data patentable.
- The
court doesn't want to patent textual data in a PDF file no matter
how novel and nonobvious the text may be. Such files are made of
executable instructions. Therefore it must take more than the
presence of novel and nonobvious instructions that are executed to
make something patentable.
- How
about patenting source code in textual form? This will raise free
speech issues. How can you discuss code if you can't write the text
without infringing? The text of source code should not be
patentable.
- How
about code written in an interpreted language? This code is never
compiled into executable binaries. The only executable binary is the
interpreter. This is the same code that runs for all programs. It
does not implement the patented algorithm and by itself it doesn't
infringe. The interpreter implements the language. The text of the
source code (which is not patentable) is given to the interpreter.
The interpreter reads the text and does what it is told. All data is
manipulated by the interpreter on behalf of the program. On what
basis should this code be patentable? The physical something that is
the basis of patentability never comes into existence. The
interpreter doesn't infringe. The data is not patentable. The source
code is not patentable. Executing instructions doesn't make
patentable something that isn't. (See the sketch after this list.)
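Here is a minimal sketch in Python of the point made in the last bullet (the mini-language and its program are invented). The interpreter below is the only code that ever executes; the "program" is nothing but text, data the interpreter reads:

    # A tiny interpreter for a made-up stack language with two commands:
    #   PUSH n  -- put the number n on a stack
    #   ADD     -- replace the top two numbers with their sum
    def interpret(program_text):
        stack = []
        for line in program_text.splitlines():
            parts = line.split()
            if not parts:
                continue
            if parts[0] == "PUSH":
                stack.append(int(parts[1]))
            elif parts[0] == "ADD":
                stack.append(stack.pop() + stack.pop())
        return stack[-1]

    # The program is plain text: data handed to the interpreter.
    program = """
    PUSH 40
    PUSH 2
    ADD
    """
    print(interpret(program))   # 42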
This
analysis is meant to communicate that software is about manipulating
symbols. It is not about physical circuitry or processes. Whatever
perceived circuitry is used to claim software is patentable,
programmers have the means to make an end-run around this alleged
circuitry. It is possible to arrange the manipulation of the symbols
in such a way that the circuitry that is the alleged basis of
patenting software never comes into existence.
We
have already encountered a theme like this when we discussed the beta
reduction algorithm of lambda-calculus. One algorithm can do the work
of every other algorithm. We can make sure the method patents are
never infringed because the claimed methods are never physically
implemented. Interpreted languages are built on this basis.
From
the same Bernhart case, there is a section that is reminiscent of the
"software makes a new machine" from Alappat:
There
is one further rationale used by both the board and the examiner,
namely, that the provision of new signals to be stored by the
computer does not make it a new machine, i. e. it is structurally
the same, no matter how new, useful and unobvious the result. This
rationale really goes more to novelty than to statutory subject
matter but it appears to be at the heart of the present controversy.
To this question we say that if a machine is programmed in a certain
new and unobvious way, it is physically different from the machine
without that program; its memory elements are differently arranged.
The fact that these physical changes are invisible to the eye should
not tempt us to conclude that the machine has not been changed. If a
new machine has not been invented, certainly a "new and useful
improvement" of the unprogrammed machine has been, and Congress
has said in 35 U.S.C. § 101 that such improvements are statutory
subject matter for a patent. It may well be that the vast majority of
newly programmed machines are obvious to those skilled in the art and
hence unpatentable under 35 U.S.C. § 103. We are concluding here
that such machines are statutory under 35 U.S.C. § 101, and that
claims defining them must be judged for patentability in light of the
prior art.
These
physical changes are the normal operation of the computer. It would
probably not have occurred to this judge to rule that turning the
steering wheel to the left is an improvement to a car that is
patentable subject matter and could be barred from patenting only
because it is old and obvious. But this is a physical change that is
applied to the car, and this change does give the car the ability to
turn left. It is a useful improvement of the car based on the logic
of this paragraph.
Machines
have moving parts. Moving the moving parts as they are intended to be
moved is not an invention. What are the moving parts of a computer?
This information is in the definition of the Turing machine. The
moving parts include the ability to read and write symbols in memory.
Moreover, the operating principle of the computer requires that programs
are stored in memory and are modifiable by the computer as the
execution of the program progresses. The changes to the memory are
not an improvement of the computer. They are the operation of the
computer as it is intended to be operated. The knowledge of the
Turing machine enables one to make this determination.
What
if we hold the other view? Suppose we agree with this court that
writing something in memory is a patentable improvement of the
computer. Then we get absurdities. Modern computers change the state
of their memory billions of times per second. Do we have billions of
improvements per second, each of them being patentable subject
matter? How does one perform due diligence to avoid patent
infringement then?
Looking
only at memory changes that concern programming instructions doesn't
help much. Such instructions are data held in memory and may be
changed as fast as the speed of memory permits.
Let's
take a look at some of the instances where the actions of a user may
cause a new program to be loaded in memory.
- You click on a menu item.
- You visit a web site and Javascript is
embedded in the page.
- You visit a web site and a Flash or Java
applet is downloaded.
- Your operating system downloads a security
patch.
- You use the add/remove application menu
option of your Ubuntu system to get applications from the web.
- You open a PDF file. The PDF document is a
series of instructions written for execution by a virtual machine.
- You open a text document that contains
macros.
- You open a spreadsheet that contains
formulas. Solving formulas in a spreadsheet counts as an algorithm,
doesn't it?
- You use a program that generates and executes
(or interprets) code on the fly. This includes many circumstances when your
program accesses a relational database using SQL, because this language
does not specify the algorithm in the code. The algorithm is
determined dynamically by the language interpreter. Since SQL is the
dominant standard for databases this situation covers most programs
that use databases.
From
a user perspective these activities are the normal operation of the
computer. It is not reasonable to ask the users to conduct a patent
search for due diligence against infringement every time they perform
one of these actions. This cannot be the correct interpretation of
patent law.
Software
is Mathematics
The
preceding section was a series of case studies explaining why lawyers
should know computation theory. Now it is time to return to the
questions that started this section. What is a mathematical formula
when implemented in digital electronics? What is an application of
the formula as opposed to the formula itself? We already know the
answer. Software is mathematics. Everything that is executed by a CPU
is mathematics. The boundaries of mathematics lie in the interaction
of the computer with the outside world.
This
idea is a hard one to understand unless you know computation theory.
When you don't know computation theory, all sorts of misconceptions
occur. For example patent attorney Gene Quinn has written an
argument against software being mathematics.
Computation theory is conspicuous by its absence in this article. He
is an eloquent man. Someone who doesn't know computation theory will
likely be convinced. If you bring computation theory into the picture, however,
his argument falls apart.
I
think this quote summarizes the core of his position:
I
think the disagreement between me and my critics is largely due to
the way patent law views mathematics, which seems admittedly quite
different from the way that many computer programmers view
mathematics.
It
is impossible to argue that software code does not employ
mathematical influences, because it does. I think Dijkstra's
explanation that a "programmer applies mathematical techniques"
is completely true and accurate. Having said this, the fact that
mathematical techniques are employed does not as a matter of fact
mean that software is mathematical. Under the US patent laws you
cannot receive a patent that covers a mathematical equation or a law
of nature. You can certainly use mathematical equations and laws of
nature as the building blocks to create something that is new and
nonobvious that is patentable. So even if software used mathematical
equations there would be no prohibition against the patenting of
software under a true and correct reading of the US patent laws.
If I paraphrase the
argument, Quinn thinks there are some mathematical equations or
methods on one side and the programmer uses them to write software on
the other side. Then he says "Look, the maths are over there. This
is not the same as the software over here." Of course when couched
in these terms, software is different from mathematics. For example if
someone writes a program simulating chemical reactions, the laws of
chemistry are mathematically defined and they are different from the
software. Or similarly if one uses mathematical influences to reason about
the code, the influences are not the code.
The error is that the
mathematics of computation theory is not the mathematics Quinn
is pointing to. You can't understand what is meant by "software is
mathematics" unless you understand computation theory.
What does it take for
the law to find as a matter of fact that software is mathematical?
Do the opinions of mathematicians and computer scientists trained in
mathematics count? Or only those of lawyers?
Since Quinn
mentioned Dijkstra, let's explain the mathematical methods
he advocated.54
He wanted the programmers to use Hoare logic to formally prove the
programs before they are compiled and run for the first time. As we
have said, Hoare logic is a formal system wrapped around an
imperative programming language. Here is what Hoare himself says in
the introduction of the seminal
paper where he first proposed his logic:
Computer
programming is an exact science in that all the properties of a
program and all the consequences of executing it in any given
environment can, in principle, be found out from the text of the
program itself by means of purely deductive reasoning. Deductive
reasoning involves the application of valid rules of inference to
sets of valid axioms. It is therefore desirable and interesting to
elucidate the axioms and rules of inference which underlie our
reasoning about computer programs. The exact choice of axioms will to
some extent depend on the choice of programming language. For
illustrative purposes, this paper is confined to a very simple
language, which is effectively a subset of all current
procedure-oriented languages.
When one writes
programs using this method, the program is a by-product of writing a
mathematical proof. Software is not "employing mathematical
influences" to use the words of Quinn. The statements in the
program are providing the applicable inference rules that are used
to write the proof. When you are done with the proof, all that is left to
do to get actual running code is insignificant post-solution
activity. You take the code that has been proven and compile it.
When software is
written in this manner, it is as mathematical as anything could be.
But if this is not sufficient to convince a court, how about software
developed using the Curry-Howard correspondence? This improves on the
above methodology in that there is no difference between the text of
a mathematical proof and the text of the program. You can compile the
proof directly to get executable code.
I will admit not all
software is developed in such a manner. In fact most programmers never
use such methods. Does the way software is developed count
for patent infringement determination? Do we need to distinguish
between software developed mathematically, which would not be
patentable, and software developed using a non-mathematical
method, which would be? Perhaps if it were decided that formal
verification techniques bring the code into a patent-free haven, these
methods would grow more popular. The industry could avoid billions of
dollars in patent infringement liabilities. A side effect would be
great improvements in the reliability of software due to the drastic
reduction in the number of bugs. But in such a scenario what would be
the point of software patents?
We
don't need to consider such hypotheticals. The activity of the
computer itself is mathematics no matter how the source code is
written. Here is how the argument goes:
- Is
the activity of a human carrying out an effective method
mathematics? Mathematicians think so.
- Is
the activity of the same human typing on an idealized typewriter
performing the same effective method mathematics? This is the same
work using a different tool.
- If
we automate the idealized typewriter to work by itself without a
human, is it doing mathematics? Again this is the same work being
done.
- If
we implement the idealized typewriter in actual real-world
electronics, is the computer doing mathematics? You get the picture
here. If the human at the start of the chain is doing mathematics,
so is the computer.
We
may also ask, how about denotational semantics? It is a translation of
the code into the language of lambda-calculus. It doesn't matter
whether the programmer uses mathematical methods or not when
developing software. The source code has a mathematical meaning
regardless of how the developer wrote it.
Now
contrast the above with this quote from Quinn:
Returning
to the matter of software generally, the logic employed by the
computer programmer is certainly of a mathematic nature, but that is
quite different than saying that the code created by the programmer
is a mathematical algorithm. Now I am speaking in generalities here,
but what I am trying to address is whether all computer software
should be considered unpatentable subject matter. I see absolutely no
justification for all software to be considered unpatentable subject
matter because it is simply not correct to say that software code is
the equivalent of a mathematical equation or a mathematical
algorithm. Employing the same logical structure is certainly wise,
and complies with best practice standards for programming, but at the
core computer software directs. The code is a series of instructions
written using mathematical logic as its foundation. In the patent
arena this does not and cannot mean that the patenting of software is
the equivalent of patenting mathematics. It merely means that the
instructions are written in a language and format that are heavily
influenced by mathematics.
This whole paragraph
betrays a complete lack of knowledge of computation theory. From
an extensional perspective, software is data that represents the
quintuples of a Turing machine. This is a mathematical entity. Or if
you prefer to take an intensional view and consider the specifics of
the hardware architecture, the instruction set of the CPU defines a
machine of some sort, based on a computational model that is different
from, but still reducible to, a Turing machine. This machine is
universal in the sense that it can perform any computation provided
it is given a symbolic description of this computation. Software is
this description. Both the universal machine and the description of
the computation are mathematical entities. The patentable physical
devices are the actual CPU as it is etched in silicon and the
computer, including memory, buses, motherboard etc. When the computer
computes, it is doing mathematical work exactly like a human being
working with pencil and paper.
We can find the root of
Quinn's misconceptions in this quote:
The
influence mathematics has on programmers is largely or perhaps even
solely related to work-flow. Mathematics teaches us how to manage a
problem by going through the steps to solve the problem in a
predictable and traceable manner. By employing the same type of
thoughtful, step-by-step approach to writing code the programmer can
manage write segments of code and tie everything together into an
overall structure that will deliver the desired functionality.
Contrast
this view with the use of Hoare logic or Curry-Howard correspondence.
Contrast this view with denotational semantics. Contrast this view
with the fact that the computer itself is executing an effective
method. The role of mathematics in software is not limited to
work flow and debugging. Without knowledge of computation theory there
is no way to understand how deep the rabbit hole goes.
There is one more
interesting argument in this quote:
I
just cannot reconcile in my mind why employing lessons learned from
mathematics would, could or ought to render software unpatentable. If
you take this logic to its extreme you are left with absolutely
nothing being patentable, which I suspect is the goal of some, but
certainly not all, of those who would rather software not be
patentable subject matter. Again, it is undeniable that mathematics
is the backbone to virtually everything. Virtually everything can be
explained using mathematical models. If you combine mathematics and
physics there is virtually no mechanical, electrical or chemical
phenomena that cannot be explained in a descriptive way. So does that
mean that mechanical, electrical and chemical inventions ought not to
be patentable?
Here we are looking at
a two-way street. It is correct that we can describe just about everything
mathematically. It is correct that such descriptions don't make
anything unpatentable. But we can also use something physical to
describe mathematics. In such a case, patenting the physical
representation is patenting mathematics. We don't patent mathematical
textbooks. Quinn's error is that he is looking at one side of the
street without considering the other. This is understandable,
considering that he is not looking at the same mathematics as we
are discussing.
When we want to invoke
this argument to dismiss the use of mathematics and claim
the physical reality is patentable, we have an obligation. We need to
make sure we correctly understand what part of mathematics is
involved, what is the corresponding physical entity, and how they
relate. If mathematics describes the physical entity, then the
physical entity may be patentable. But what if things go the other
way round? In the case of software the analysis goes like this:
- When a human being
writes something mathematical with pencil and paper, does the text
describe mathematics? Or does the mathematics describe the text?
The text describes mathematics of course.
- When the human uses
a typewriter to write the same mathematics, the written text still
describes the maths. The maths don't describe either the text or
the typewriter.
- If we automate the
typewriter to execute an effective method, the relationship between
the text and the typewriter doesn't change. The text describes the
maths. The maths don't describe the automated typewriter nor the
text.
- If we build an
electronic device that does the same calculations as the automated
typewriter but with symbols encoded electronically instead, does
their meaning change? Of course not, the symbols still mean the
mathematics. The maths don't describe the computer or anything
physical in the computer. The maths don't describe the software
either.
As a concluding
remark, did you notice how I have used reducibility to track where
the meaning of symbols lies? This is exactly the sort of thing
mathematicians have used reducibility for. This sort of thing is
exactly why mathematicians prefer an extensional view over an
intensional one to track abstract concepts across different
representations.
References
Cases
Application
of Walter D. BERNHART and William A. Fetter.
417 F.2d 1395
Bilski v. Kappos, Amicus Brief
of Professor Lee A. Hollaar and IEEE-USA.
Diamond
v. Diehr, 450
U.S. 175, 182 (1981)
In
re Alappat,
U.S. Court
of Appeals Federal Circuit, July 29, 1994
Parker
v. Flook 437
U.S. 584 (1978)
On the Web
[Agda] Agda: an interactive proof editor
[Campbell] MacGregor Campbell, Universal kernel code to keep computers safe, from New Scientist
[Cayenne] Cayenne - hotter than Haskell
[CLISP] GNU CLISP home page
[Compcert] Compcert - compilers you can formally trust
[Coq] The Coq proof assistant
[Dictionary] Dictionary of Algorithms and Data Structures
[Dijkstra] On the cruelty of really teaching computer science (manuscript) (html)
[Epigram] Home page of the Epigram project
[Groklaw] Correcting Microsoft's Bilski Amicus Brief -- How Do Computers Really Work?
[Metamath] Metamath Home Page
[Quinn] Groklaw Response: Computer Software is Not Math
[Turing Archives] The Turing Archive for the History of Computing. This site is maintained by Jack Copeland, professor at the University of Canterbury, New Zealand
[Stanford] The Stanford Encyclopedia of Philosophy
In Print
[Davis 2000] Davis, Martin, Engines of Logic: Mathematicians and the Origin of the Computer, W.W. Norton and Company, 2000. This book was originally published under the title The Universal Computer: The Road from Leibniz to Turing.
[Gödel 1986] Gödel, Kurt, Collected Works, vol. 1, Oxford: Oxford University Press, 1986.
[Hoare 1969] Hoare, Charles Antony Richard, An Axiomatic Basis for Computer Programming, Communications of the ACM, October 1969, p. 580.
[Kleene 1950] Kleene, Stephen Cole, Introduction to Metamathematics, D. Van Nostrand Company, 1950.
[Soare 2009] Soare, Robert I., Turing Oracle Machines, Online Computing, and Three Displacements in Computability Theory, Annals of Pure and Applied Logic, to appear in the volume of proceedings of the Computability in Europe 2007 meeting, Siena. Available on-line [Link]. In a private email Dr. Soare informs me this has been published by Elsevier; however, I don't have the actual publication title, so I have used the on-line version.
[Stoy 1977] Stoy, Joseph E., Denotational Semantics: The Scott-Strachey Approach to Programming Language Theory, The MIT Press, 1977.
[Turing 1936] Turing, Alan, On Computable Numbers, with an Application to the Entscheidungsproblem, Proceedings of the London Mathematical Society, ser. 2, vol. 42 (1936), pp. 230-67. Correction: vol. 43 (1937), pp. 544-546. This paper can be ordered from the publisher here [Link] and is available on-line here [link].
[Turing 1939] Turing, Alan, Systems of Logic Based on Ordinals, Proceedings of the London Mathematical Society, vol. 45, Part 3 (1939), pp. 161-228. This paper is available on-line here [Link].
Footnotes
1 I believe the rulings in Benson, Flook and Diehr by the Supreme Court of the United States all reached conclusions that I consider to be technically correct.
2 This is why the Stanford Encyclopedia of Philosophy is one of the authorities I will quote. See [Stanford].
4 Prior to the invention of the electronic digital computer, the word "computer" referred to humans doing calculations. See the Brief History of Computing at the Turing archives [Turing Archives].
5 The
decimal expansion of the number π has an infinite number of
decimals. No matter how fast we calculate them, the calculation will
never end. Therefore the definition of what constitutes a
mathematical calculation must not be contingent on the practical
limitations of a human doing the calculation.
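To make this concrete, here is a small sketch in Python (my own illustration, not part of the original footnote). The generator below produces ever better approximations of π using the Leibniz series 4 - 4/3 + 4/5 - 4/7 + ... These are approximations rather than the digit-by-digit expansion, but the never-ending character of the calculation is the same: the loop has no stopping condition, yet the calculation is perfectly well defined as mathematics even though nobody could ever finish it.

def leibniz_pi():
    # Successive partial sums of the Leibniz series, approximating pi.
    total = 0.0
    k = 0
    while True:                 # the loop never terminates
        total += (-1) ** k * 4.0 / (2 * k + 1)
        k += 1
        yield total

approx = leibniz_pi()
for _ in range(5):
    print(next(approx))         # 4.0, 2.666..., 3.466..., 2.895..., 3.339...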
6 This
text was written before the Supreme Court ruling on Bilski v. Kappos. I
don't know if the ruling will affect the accuracy of this sentence.
7 Some
of this history is summarized in chapter III of [Kleene 1950].
Another account that goes way over my head in the mathematical
details may be found in the article on
Paradoxes
and Contemporary Logic of the Stanford Encyclopedia of
Philosophy. See also the article on Russell's
paradox.
8 Martin Davis traces this idea back to Leibniz (1646-1716). See [Davis 2000].
9 In this section on formal systems I dig deeper into the connection between algorithms and abstract thinking. Computation theory has very close ties with the very definition of what a mathematical proof is, and I am exploring some of that here. I am laying some of the groundwork I need to explain why software is discovered and not invented, why software is abstract and why software is mathematics.
10 Inference
is a fancy word that here means deduction.
11 A
complete list of these rules can be found in chapter IV section 19
of [Kleene 1950]
12 This
means mathematicians can't argue their theorem with a mesmerizing
tap dance and hope nobody will follow up and check it, as sometimes lawyers seem to do
in litigation. Other mathematicians will never look at how plausible
the argument may sound. They will check each inference one by one
and make sure every single one of them follows the rules. As soon as
they find a single inference that doesn't clearly follow the rules,
the whole proof is ruthlessly rejected.
13 The consequence of this fact is that there is no dispute among mathematicians on whether a theorem is proven or not. They check the proof and either it passes the test or it doesn't. This fact may be put to productive use by lawyers. In any issue where mathematical truth is relevant, theorems that are brought into the court record will not be disputed by anyone competent to do so.
14 This
is why you have axioms for arithmetic, other axioms for geometry,
yet other axioms for set theory etc. The intuitive truths at the
foundation of any mathematical discipline are different. The axioms
are chosen to reflect these truths.
15 Hilbert called this metamathematics, which means mathematics about mathematics. See [Kleene 1950].
16 This
requirement further expands on the role effective methods play in
Hilbert's program. However this part of the program cannot be done.
There are theorems that show that effective methods to solve all
mathematical questions in a consistent formal system suitable to be
the foundation of all of mathematics are not mathematically
possible. However it is possible to develop algorithms of a more
limited scope that will automatically discover the proof of some
theorems. According to the article on Automated
Reasoning of the Stanford Encyclopedia of Philosophy, these
algorithms have reached the point where they can be used by
researchers to attack open questions in mathematics and logic and to
solve problems in engineering. This is further evidence of the tie
between computer programs and abstract thinking.
17 There
is a risk of circular reasoning in this logic. If the mathematical
analysis of the formal system is itself subject to paradoxes, then
the proofs envisioned by Hilbert would be unreliable and nothing
would be really solved. This is why Hilbert further required that
the proof of consistency and completeness must be done through
finitary means. The reason is all the paradoxes that plagued
mathematics during Hilbert's time were located in parts of
mathematics that were dealing with infinity in one way or another.
Hilbert argued the proofs of consistency, completeness and
decidability should only use parts of mathematics that are
considered safe from paradoxes and therefore could be relied on.
18
See chapters IV to VIII of [Kleene 1950] for the development and
analysis of a formal system sufficient to form the basis of the
arithmetic of integers. Most of mathematics can be based on a formal
system called the Zermelo-Fraenkel
set theory.
19 A
very readable account of this history can be found in [Davis 2000]
20 This
is where dictionary definitions of algorithm are lacking. For
example the Dictionary of Algorithms and Data Structures
[Dictionary] defines algorithm as "a computable set of steps to
achieve a desired result." Literally the definition is correct.
But it doesn't capture nuances like degrees of abstraction such as
the one illustrated in the example of the C sorting program. Lawyers
working from dictionary definitions without knowledge of computation
theory have no chance of really understanding algorithms.
21 In
mathematics a pair of mathematical objects is also a mathematical
object.
22 This explanation is based on the simplest form of recursion, called primitive recursion. Effective methods are equivalent to recursion in its full generality. See [Kleene 1950] for a complete explanation. See also the article on Recursive Functions in the Stanford Encyclopedia of Philosophy.
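To make the schema concrete, here is a small Python sketch (my own illustration). Addition and multiplication are defined using nothing but zero, the successor operation, and functions defined earlier, which mirrors how primitive recursion builds new functions from old ones.

def succ(n):
    return n + 1

def add(m, n):
    # add(m, 0) = m ; add(m, n+1) = succ(add(m, n))
    if n == 0:
        return m
    return succ(add(m, n - 1))

def mul(m, n):
    # mul(m, 0) = 0 ; mul(m, n+1) = add(mul(m, n), m)
    if n == 0:
        return 0
    return add(mul(m, n - 1), m)

print(add(2, 3))   # 5
print(mul(2, 3))   # 6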
23 Gödel
has provided a full translation of a formal system called Principia
Mathematica in his seminal paper On formally undecidable
propositions of Principia Mathematica and related systems. The
original was in German and published in 1931. An English translation
is available in [Gödel 1986] pp 144-195.
24 Two of these consequences are the famous Gödel incompleteness theorems. They proved that Hilbert's program is impossible to fulfill: when a formal system is powerful enough to serve as the foundation of Peano arithmetic, consistency and completeness are mutually incompatible. A system that is consistent cannot be complete and a system that is complete cannot be consistent. These theorems are outside the scope of this text.
25 Reducibility
is a term of art in computation theory.
26 It
can be said that the bits in computer electronics are numbers.
Textual symbols are represented using numeric encodings like ASCII
or Unicode. The original Gödel numbering system is not friendly to
modern electronics and more convenient encodings were developed.
This doesn't change the substance of the argument being made. The
bits are symbols that represent something abstract.
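A tiny Python illustration of this point (mine, not from the original footnote): the same letter 'A' can be viewed as a symbol, as a number, or as bits, without its meaning changing.

print(ord('A'))               # 65: the ASCII/Unicode code point
print(bin(ord('A')))          # 0b1000001: the same number written as bits
print('A'.encode('utf-8'))    # b'A': the byte that stores it
print(chr(65))                # 'A': the number read back as a symbol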
27 We
can make such a list by listing all series of one symbol A, B .. Z,
then all series of two-symbols AA, AB ... ZZ and then series of
three symbols, four symbols etc., and we filter out those series of
symbols that are not valid programs according to the rules of syntax
of the programming language. Whatever program someone writes, it
must be on this list.
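Here is a Python sketch of that enumeration (my own illustration). It lists every string over a small alphabet in order of length and keeps only those that are syntactically valid; I use Python's own compile() as a stand-in for the "rules of syntax" check, an assumption made purely for the sake of the example.

from itertools import count, product

def all_programs(alphabet="ab=1"):
    for length in count(1):                      # one symbol, two symbols, ...
        for symbols in product(alphabet, repeat=length):
            text = "".join(symbols)
            try:
                compile(text, "<candidate>", "exec")
            except SyntaxError:
                continue                         # filter out invalid programs
            yield text                           # a syntactically valid program

gen = all_programs()
for _ in range(5):
    print(repr(next(gen)))                       # 'a', 'b', '1', 'aa', 'ab'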
28 Abstraction
and application are terms of art specific to lambda-calculus.
29 There is also an alpha-reduction that lets you rename a variable. For example (λx.x+7) may be rewritten (λy.y+7) without changing the meaning of the abstraction.
30 Nowadays
this is called "pure" lambda-calculus. There are variants of
lambda-calculus that augment lambda-calculus with operators from
other branches of mathematics.
31 This
translation is provided in detail in chapter 5 of [Stoy 1977] among
other places. Note that I said the computations can be translated.
This means the recursive functions are translated, and this is the
point that is of interest to computation theory. The method I am
referring to doesn't have the capability to reproduce predicate
calculus which is an integral part of Peano arithmetic. Therefore
this part of Peano arithmetic is not included in the model.
32 For those of you who want a taste of how modeling works, I give some more explanations. A model is required to reproduce all the base concepts and then demonstrate that the translations obey the same logical rules as the original.
We have seen how zero is translated.
We also need a translation for the successor function; this is the
apostrophe in Peano arithmetic. The translation is λzλsλx.s(zsx).
Why this cryptic formula? It is because when you apply it to one of
the formulas that are used to translate numbers and perform beta
reduction you will obtain the translation of the next number. Let's
see an example of how it works.
The formula below applies the
successor to zero. After beta reduction you should get 1.
(λzλsλx.s(zsx))(λsλx.x)
Now we need to perform beta reduction on z to resolve the application.
λsλx.s((λsλx.x)sx)
Within the parentheses, we have a pair of successive applications (λsλx.x)sx. In situations like this, the rules of lambda-calculus say you must perform the two beta reductions one at a time. The first substitution requires you to replace the bound variable s with the argument s in λx.x. But there is no s in there, so this is just a do-nothing operation that removes what is unnecessary.
λsλx.s((λx.x)x)
The next beta reduction requires you to replace x with x. Again, this is a do-nothing operation that removes what is unnecessary.
λsλx.s(x)
This result is indeed the intended translation for the number 1.
As you can see, these kinds of models
are not intended to provide an efficient method to do arithmetic.
They are intended to study the expressive power of formal languages
and gain understanding of how abstract concepts relate to each other.
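For readers who would like to experiment, the same translation can be played with in Python (my own sketch, not part of the original footnote): numbers become functions, exactly as in the formulas above, and beta reduction becomes ordinary function application.

zero = lambda s: lambda x: x                        # the translation of 0
succ = lambda z: lambda s: lambda x: s(z(s)(x))     # the successor formula

def to_int(church):
    # Read a translated number back as an ordinary integer by applying
    # it to "+1" and 0.
    return church(lambda n: n + 1)(0)

one = succ(zero)          # the application worked out step by step above
two = succ(one)
print(to_int(zero), to_int(one), to_int(two))       # 0 1 2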
33 There are infinitely many algorithms that may serve this purpose. As noted above, the native algorithm of lambda-calculus is too inefficient to be put into practical use. It can be improved into something that can be used in practice. Languages such as LISP make use of these improvements.
34 I have found this distinction in [Soare 2009], p. 13. This is a slightly different concept from extensionality as used in set theory.
35 I
write "similar" concept and not "identical" concept. The law
will consider issues like derivative works that have no equivalent in
mathematics.
36 This
fact contradicts the point of view of those that want to patent
software on the basis that transistors and other signals are
changing state in memory. This is an intensional detail that is
absent from both the extensional properties of the software and the
extensional test for patent infringement. If the test for
infringement is extensional, then the invention subject matter must
be defined extensionally.
37 For
a more detailed explanation, see for example [Stoy 1977]
38 A
debugger is a programming tool that lets a programmer verify the
execution of a program and inspect the computer memory as it
executes. Its purpose is to help find bugs and understand
the error so it can be corrected.
39 See
Bilski v. Kappos, amicus brief of Professor Lee A. Hollaar and IEEE-USA.
40 Imperative languages are made of statements that translate directly into instructions that can be executed by a CPU. These languages are well suited for programs meant to be executed on single-CPU computers. They are popular because single-CPU computers have dominated the market for so many years. Much of the understanding of programming that lawyers have developed is based on compiled imperative languages. This understanding often doesn't apply when other types of languages are considered. Languages can also be interpreted, or they can be something that is not imperative at all. It is dangerous to make case law based on compiled imperative languages without considering how the underlying logic applies to other types of languages.
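To illustrate the difference with a small Python example (my own, not from the original footnote), here is the same computation -- the sum of the squares of 1 to 10 -- written first as imperative statements that change a variable step by step, and then as a single declarative expression. Reasoning that depends on the first form does not automatically carry over to the second.

total = 0
for n in range(1, 11):        # imperative: explicit steps mutating a variable
    total = total + n * n
print(total)                  # 385

print(sum(n * n for n in range(1, 11)))    # declarative: one expression, also 385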
41 The classical view is that you are allowed to argue by contradiction. You assume a solution doesn't exist and prove a contradiction. Therefore some solution must exist somehow, even if you don't actually know what it is. Intuitionistic logic disallows this form of reasoning.
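A standard textbook example makes the difference vivid (this illustration is my own addition, not from the original footnote). Claim: there exist irrational numbers a and b such that a^b is rational. Classical proof: either √2^√2 is rational, in which case take a = b = √2, or it is irrational, in which case take a = √2^√2 and b = √2, since then a^b = (√2^√2)^√2 = √2^2 = 2. The argument convinces us that a solution exists, yet it never tells us which of the two candidate pairs actually works. That is exactly the kind of proof intuitionistic logic refuses to accept.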
43 The
Compcert compiler is an
example. From the Compcert page: "The Compcert verified compiler
is a compiler for a large subset of the C programming language that
generates code for the PowerPC processor. The distinguishing feature
of Compcert is that it has been formally verified using the Coq
proof assistant: the generated assembly code is formally guaranteed
to behave as prescribed by the semantics of the source C code.".
44 Namely it eliminates all bugs that are due to the actual code not matching the intent of the programmer as expressed in the proof. It doesn't eliminate bugs due to the programmer intending to do the wrong thing, it does not eliminate bugs due to the programmer proving something that doesn't mean what he thinks it means, and it does not eliminate problems due to the hardware not working as expected. But still, this gets rid of a very large category of bugs.
45 This method was first proposed by C.A.R. Hoare in [Hoare 1969].
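To give a taste of the idea in a small Python sketch (my own illustration, much simplified compared to what [Hoare 1969] does formally): each piece of code is bracketed by a precondition and a postcondition, written here as the Hoare triple {n >= 0} body {s = n*(n+1)/2}. In the sketch the assert statements merely spot-check the conditions at run time; in Hoare's method they are proved once and for all, for every possible input.

def sum_to(n):
    assert n >= 0                      # precondition
    s, i = 0, 0
    while i < n:                       # loop invariant: s == i*(i+1)//2 and i <= n
        i += 1
        s += i
    assert s == n * (n + 1) // 2       # postcondition
    return s

print(sum_to(10))                      # 55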
46 An
account of this story can be found in [Davis 2000] chapter 7.
Turing's original analysis is found in [Turing 1936] section 9.
47 A summary of this history can be found in the Stanford Encyclopedia of Philosophy. This is also the topic of chapters 7 and 8 of [Davis 2000]. The Turing Archive's brief history of computing reports the following:
Turing's
computing machine of 1935 is now known simply as the universal
Turing machine. Cambridge mathematician Max Newman has remarked that
right from the start Turing was interested in the possibility of
actually building a computing machine of the sort that he had
described (Newman in interview with Christopher Evans ('The Pioneers
of Computing: an Oral History of Computing, London Science Museum,
1976)).
During
the Second World War, Turing was a leading cryptanalyst at the
Government Code and Cypher School, Bletchley Park. Here he
became familiar with Thomas Flowers' work involving large-scale
high-speed electronic switching (described below). However, Turing
could not turn to the project of building an electronic
stored-program computing machine until the cessation of hostilities
in Europe in 1945.
See also
from the same article Turing's work with Colossus and ACE.
48 The Stanford Encyclopedia of Philosophy describes the contribution of John von Neumann like this:
In 1944,
John von Neumann joined the ENIAC group. He had become ‘intrigued'
(Goldstine's word, [1972], p. 275) with Turing's universal machine
while Turing was at Princeton University during 1936–1938. At the
Moore School, von Neumann emphasised the importance of the
stored-program concept for electronic computing, including the
possibility of allowing the machine to modify its own program in
useful ways while running (for example, in order to control loops
and branching). Turing's paper of 1936 (‘On Computable Numbers,
with an Application to the Entscheidungsproblem') was required
reading for members of von Neumann's post-war computer project at
the Institute for Advanced Study, Princeton University (letter from
Julian Bigelow to Copeland, 2002; see also Copeland [2004], p. 23).
Eckert appears to have realised independently, and prior to von
Neumann's joining the ENIAC group, that the way to take full
advantage of the speed at which data is processed by electronic
circuits is to place suitably encoded instructions for controlling
the processing in the same high-speed storage devices that hold the
data itself (documented in Copeland [2004], pp. 26–7). In 1945,
while ENIAC was still under construction, von Neumann produced a
draft report, mentioned previously, setting out the ENIAC group's
ideas for an electronic stored-program general-purpose digital
computer, the EDVAC (von Neuman [1945]). The EDVAC was completed six
years later, but not by its originators, who left the Moore School
to build computers elsewhere. Lectures held at the Moore School in
1946 on the proposed EDVAC were widely attended and contributed
greatly to the dissemination of the new ideas.
Von
Neumann was a prestigious figure and he made the concept of a
high-speed stored-program digital computer widely known through his
writings and public addresses. As a result of his high profile in
the field, it became customary, although historically inappropriate,
to refer to electronic stored-program digital computers as ‘von
Neumann machines'.
50 These
consequences are in addition to the other arguments supporting the
same conclusions that are scattered throughout this article.
52 Turing
assumed the oracle would be some kind of mathematical function.
Given a physical computer it is easy to extend the concept to forms
of input that are outside the scope of pure mathematics. Computers
can be connected to physical measurement devices.
53 See
also [Davis 2000] chapter 8.
54 For instance, see On the cruelty of really teaching computer science (html). See [Dijkstra] for a link to a scan of the original manuscript.
Before we
part, I would like to invite you to consider the following way of
doing justice to computing's radical novelty in an introductory
programming course.
On the one
hand, we teach what looks like the predicate calculus, but we do it
very differently from the philosophers. In order to train the novice
programmer in the manipulation of uninterpreted formulae, we teach
it more as boolean algebra, familiarizing the student with all
algebraic properties of the logical connectives. To further sever
the links to intuition, we rename the values {true, false} of the
boolean domain as {black, white}.
On the
other hand, we teach a simple, clean, imperative programming
language, with a skip and a multiple assignment as basic statements,
with a block structure for local variables, the semicolon as
operator for statement composition, a nice alternative construct, a
nice repetition and, if so desired, a procedure call. To this we add
a minimum of data types, say booleans, integers, characters and
strings. The essential thing is that, for whatever we introduce, the
corresponding semantics is defined by the proof rules that go with
it.
Right from
the beginning, and all through the course, we stress that the
programmer's task is not just to write down a program, but that his
main task is to give a formal proof that the program he proposes
meets the equally formal functional specification. While designing
proofs and programs hand in hand, the student gets ample opportunity
to perfect his manipulative agility with the predicate calculus.
Finally, in order to drive home the message that this introductory
programming course is primarily a course in formal mathematics, we
see to it that the programming language in question has not been
implemented on campus so that students are protected from the
temptation to test their programs. And this concludes the sketch of
my proposal for an introductory programming course for freshmen.
Authored by: darksepulcher on Wednesday, November 11 2009 @ 08:25 PM EST |
Please file any corrections under this thread so PJ can find them with ease.
Thank you.
---
Had I but time--As this fell Sergeant, Death
Is strict in his arrest--O, I could tell you--
But let it be.
(Hamlet, Act V Scene 2)
- Typo - Authored by: PolR on Wednesday, November 11 2009 @ 10:16 PM EST
- Missing word - Authored by: PolR on Wednesday, November 11 2009 @ 10:27 PM EST
- Benrhart ? - Authored by: Anonymous on Thursday, November 12 2009 @ 12:36 AM EST
- Benrhart ? - Authored by: PolR on Thursday, November 12 2009 @ 12:48 AM EST
- Footnote 9 incomplete - Authored by: halfhuman on Thursday, November 12 2009 @ 02:50 AM EST
- Plural should be singular? - Authored by: halfhuman on Thursday, November 12 2009 @ 03:08 AM EST
- Singular to plural? - Authored by: halfhuman on Thursday, November 12 2009 @ 03:33 AM EST
- Plural to singular? - Authored by: halfhuman on Thursday, November 12 2009 @ 03:48 AM EST
- When you know the circle - Authored by: Anonymous on Thursday, November 12 2009 @ 06:02 AM EST
- does the text describes mathematics - Authored by: Anonymous on Thursday, November 12 2009 @ 06:03 AM EST
- "if someone write a program" --> "if someone writes a program" [N/T] - Authored by: Anonymous on Thursday, November 12 2009 @ 09:36 AM EST
- General Suggestions - Authored by: Anonymous on Thursday, November 12 2009 @ 11:10 AM EST
- Footnote 1 polarity? - Authored by: chissg on Thursday, November 12 2009 @ 02:17 PM EST
- Misstated definition? - Authored by: Anonymous on Thursday, November 12 2009 @ 02:24 PM EST
- Footnote 48 - Authored by: Alan(UK) on Thursday, November 12 2009 @ 02:40 PM EST
- Extensionality - Authored by: Anonymous on Thursday, November 12 2009 @ 10:50 PM EST
- can be done. --> can be done away with. - Authored by: Anonymous on Thursday, November 12 2009 @ 11:39 PM EST
- Typos and corrections - Authored by: Anonymous on Saturday, November 14 2009 @ 01:42 AM EST
- Corrections Thread - Authored by: grundy on Saturday, November 14 2009 @ 07:32 PM EST
- "Now imaging the whole formal system" -> "Now imagine the whole formal system" (n/t) - Authored by: clemenstimpler on Tuesday, November 17 2009 @ 09:03 PM EST
Authored by: darksepulcher on Wednesday, November 11 2009 @ 08:28 PM EST |
This thread is for discussions that have nothing to do with the above article.
Posting on-topic material in here will earn you the wrath of some nameless
entity who will proceed to haunt you in some nameless way.
---
Had I but time--As this fell Sergeant, Death
Is strict in his arrest--O, I could tell you--
But let it be.
(Hamlet, Act V Scene 2)
- CIO Blast from the Past: 40 years of Multics, 1969-2009 - Authored by: macrorodent on Wednesday, November 11 2009 @ 11:55 PM EST
- 4GB USB Flash Drive... What to put on it ? - Authored by: Anonymous on Thursday, November 12 2009 @ 02:11 AM EST
- Mono for Visual Studio - Authored by: TiddlyPom on Thursday, November 12 2009 @ 08:22 AM EST
- Shuttleworth responds to sexism allegations - Schroder apologies for "sexist twit" remark. - Authored by: SilverWave on Thursday, November 12 2009 @ 09:50 AM EST
- OT: AMD vs Intel - Authored by: Peter H. Salus on Thursday, November 12 2009 @ 10:08 AM EST
- Karmic 9.10 - nvidia issues - root cause - Authored by: SilverWave on Thursday, November 12 2009 @ 10:19 AM EST
- EU objects to Sun-Oracle deal - Authored by: Anonymous on Thursday, November 12 2009 @ 11:56 AM EST
- Groklaw needs a rating system - Authored by: Anonymous on Thursday, November 12 2009 @ 12:42 PM EST
- new language Google GO - Authored by: Anonymous on Thursday, November 12 2009 @ 01:47 PM EST
- "The best mitigation is to not use Flash" - Authored by: SpaceLifeForm on Thursday, November 12 2009 @ 08:53 PM EST
- Is this the best Microsoft can do? - Authored by: Anonymous on Thursday, November 12 2009 @ 11:00 PM EST
- Thanks for PDF -- done right! - Authored by: Anonymous on Monday, November 16 2009 @ 06:17 AM EST
Authored by: darksepulcher on Wednesday, November 11 2009 @ 08:30 PM EST |
Discussion of news picks goes here. Please mention the title of the news pick
you are discussing so everyone can follow along. Thank you.
---
Had I but time--As this fell Sergeant, Death
Is strict in his arrest--O, I could tell you--
But let it be.
(Hamlet, Act V Scene 2)
Authored by: Anonymous on Wednesday, November 11 2009 @ 08:40 PM EST |
I think I shall have to seek a patent for the method of long division. I can see
no intrinsic difference between software, algorithms, and the method of long
division.
Authored by: SLi on Wednesday, November 11 2009 @ 08:42 PM EST |
Also, it's now clear that just as you wouldn't go into litigation
without a lawyer, if you are wise, because that is their area of expertise,
lawyers also shouldn't assume they understand software and patents and how they
relate unless they get help from technology experts who know how software and
computers, and the underlying math, really work.
Yet the most
baffling "misunderstandings" usually come from the companies which are most
certain to have lots of talented people to explain this stuff to lawyers. Why?
Well, simply, because it's the job of, say, IBM or Microsoft lawyers to get the
result that IBM (read: the company, its execs, definitely not its
programmers and geeks) wants.
Of course, it will still help those IBM
lawyers if they really understand the subject matter, but I'm not convinced it
helps them a lot. It may help them prepare to answer the most obvious geek
responses to their position and to plan how to best steer the discussion (every
court has a limited time) so that the issues remain muddy enough to the judges
that their position stays plausible enough in the judges' eyes.
It would be
curious to know, though, how thoroughly the lawyers on the pro-swpat side really
understand the issue. My guess is that some do understand it quite well, but it
would be only the very best of them.
Authored by: Anonymous on Wednesday, November 11 2009 @ 08:43 PM EST |
I haven't even gotten past the first few paragraphs, and I can already tell this
is one I'll reserve a lazy weekend afternoon reading and digesting. I'll be a
better person for it, and maybe even a better programmer. Thank you, PoIR.
Authored by: Anonymous on Wednesday, November 11 2009 @ 08:54 PM EST |
The philosophy graduate part of me can't possibly let the statement "all
software is discovered and not invented" stand unchallenged. Please provide
reasonable contrasting definitions of "discovery" and
"invention" which allow both things to be possible and result in
"software" being classified as a discovery rather than an invention.
I suspect there's a relevant legal definition of "discover" vs
"invent" which trumps all other semantic quibbling in the current
context, but from a purely philosophical perspective I see the process of
invention as being a process of discovering techniques which work to achieve a
particular end. In that sense, all invention is discovery, but not all discovery
is invention. The art of designing a system (whether it be software or hardware,
electronic or mechanical) is invention.
- Discovery vs. Invention - Authored by: Anonymous on Wednesday, November 11 2009 @ 10:16 PM EST
- You misunderstand - Authored by: Anonymous on Thursday, November 12 2009 @ 12:34 PM EST
- Discovery vs. Invention vs. Creation in Mathematics - Authored by: leopardi on Wednesday, November 11 2009 @ 11:02 PM EST
- Discovery vs. Invention - Authored by: Anonymous on Wednesday, November 11 2009 @ 11:07 PM EST
- Discovery vs. Invention - Authored by: globularity on Wednesday, November 11 2009 @ 11:08 PM EST
- Discovery vs. Invention - Authored by: Anonymous on Thursday, November 12 2009 @ 01:51 AM EST
- Discovery vs. Invention - Authored by: Anonymous on Thursday, November 12 2009 @ 02:09 AM EST
- Discovery vs. Invention: What difference do you think it makes? - Authored by: Anonymous on Thursday, November 12 2009 @ 04:13 AM EST
- Discovery vs. Invention: What difference do you think it makes? - Authored by: Anonymous on Thursday, November 12 2009 @ 04:26 AM EST
- I TOLD PoIR not to leave this bit out! - Authored by: Ian Al on Thursday, November 12 2009 @ 06:29 AM EST
- Discovery vs. Invention - Authored by: Steve Martin on Thursday, November 12 2009 @ 06:42 AM EST
- An absolutely false premise - Authored by: Anonymous on Thursday, November 12 2009 @ 06:54 AM EST
- The process of discovering a program - patented! - Authored by: kenryan on Thursday, November 12 2009 @ 05:46 PM EST
- Discovery vs. Invention - Authored by: Anonymous on Thursday, November 12 2009 @ 10:47 PM EST
Authored by: Anonymous on Wednesday, November 11 2009 @ 09:14 PM EST |
This is a wonderful post. I only hope that the lawyers who see it will read and
try to understand ALL of it.
Sometimes software is data - i.e. information to be manipulated by the
computer.
Sometimes software is an instruction list - i.e. information that tells the
computer how to manipulate the information that is considered to be data.
Software changes "color" depending upon HOW it is used. This fact is completely overlooked by the people who believe that it should be patentable.
Just like mathematics: software is sometimes used to compute things and
sometimes used to describe how things should be computed.
Authored by: rcweir on Wednesday, November 11 2009 @ 09:57 PM EST |
The one thing that nags me in the back of my head is this: Isn't the physical
universe itself essentially mathematics? Isn't math intrinsic to the physical
as well as the mental world?
One of the trends of the information age is the virtualization of things that
formerly were only done physically. For example, instead of underground nuclear
tests, today we simulate nuclear bombs on computers. We can design and simulate
the effectiveness of aircraft designs on computers. We can do chemical synthesis
experiments virtually. Even your average video game system today has a powerful
physics simulation on it.
So if we argue that software is not patentable because it is reducible to
mathematics (which itself is not patentable), then what happens when we find
that many or most physical inventions are also (in theory) reducible to
mathematics as well, to the extent they can be simulated by computers running
software?
Help me out here. What am I missing?
- Lingering doubts - Authored by: PolR on Wednesday, November 11 2009 @ 10:18 PM EST
- Algorithms versus Machines - Authored by: jbb on Wednesday, November 11 2009 @ 11:36 PM EST
- You don't patent the idea, but the physical expression - Authored by: Anonymous on Thursday, November 12 2009 @ 04:30 AM EST
- Lingering doubts - Authored by: Anonymous on Thursday, November 12 2009 @ 05:31 AM EST
- Lingering doubts - Authored by: Anonymous on Thursday, November 12 2009 @ 06:37 AM EST
- Lingering doubts - Authored by: Anonymous on Thursday, November 12 2009 @ 06:58 AM EST
- Backwards. - Authored by: kenryan on Thursday, November 12 2009 @ 07:08 AM EST
- Lingering doubts - Authored by: ThrPilgrim on Thursday, November 12 2009 @ 07:55 AM EST
- computability is a piece of the puzzle. - Authored by: reiisi on Thursday, November 12 2009 @ 08:19 AM EST
- Lingering doubts - Authored by: Anonymous on Thursday, November 12 2009 @ 09:19 AM EST
- Lingering doubts - Authored by: Anonymous on Thursday, November 12 2009 @ 10:33 AM EST
- Lingering doubts - Authored by: Anonymous on Thursday, November 12 2009 @ 12:26 PM EST
- Math cannot fully describe the universe - Authored by: Anonymous on Thursday, November 12 2009 @ 06:10 PM EST
- Physical universe is not known mathematics. - Authored by: Anonymous on Thursday, November 12 2009 @ 06:57 PM EST
- Correction - Authored by: Anonymous on Thursday, November 12 2009 @ 07:00 PM EST
- Correction - Authored by: glimes on Saturday, November 14 2009 @ 05:53 PM EST
- Also.. - Authored by: Anonymous on Sunday, November 15 2009 @ 11:00 AM EST
- Lingering doubts - Authored by: Anonymous on Friday, November 13 2009 @ 12:22 AM EST
- The physical world is messier than a simulation - Authored by: Anonymous on Friday, November 13 2009 @ 02:20 AM EST
- Lingering doubts - Authored by: iraskygazer on Friday, November 13 2009 @ 04:45 AM EST
Authored by: Anonymous on Wednesday, November 11 2009 @ 10:07 PM EST |
I haven't read the whole thing. It seems really good as
far as I got. You really put a lot of work into it and it
shows. However, the only problem I have with this is not
one of content.
I guess that I have become jaded by what I have observed
of the various lawyer arguments presented here on Groklaw
and how they will do anything to help their clients,
including telling untruths and just plain making things up.
To get to the point, unless a lawyer would find it necessary to be educated in computation theory for the sake of their client, this will probably fall on blind eyes. It seems that most will say whatever suits their clients' best interest, and so far, appearing to be an ignorant expert in order to misrepresent the real facts seems to be the norm. All that said, I hope it gets read and understood, and can be a positive force in getting lawyers to see things as they really are.
Best of luck and thank you.
Authored by: SLi on Wednesday, November 11 2009 @ 10:42 PM EST |
Now that I finally finished reading the text with thought, I must say it's very
good, even slightly enlightening also to me even if I already am a theoretical
computer science and computation theory geek. This is stuff I'm fairly familiar
with, but it was quite enlightening to see it explained so... simply. And I
appreciated the historical details, like the term "effective method",
which I hadn't heard about.
I only hope it's easy enough for a non-mathematical lawyer geek to understand. I
especially loved how the text "connects the dots" by analysing legal
cases and Quinn's arguments. I think I probably could have made each or most of
the arguments for "SW is maths" myself, but where PolR's (I guess)
"over 25 years of experience" show is the ability to explain these to
the layman so comprehensibly and relatively briefly and to connect them in such
an interesting and simple way.
Authored by: gfim on Wednesday, November 11 2009 @ 10:43 PM EST |
Congratulations PolR on a wonderful and enlightening essay. Unfortunately, I
don't see it having any significant effect - people are being paid to not
understand these things!
On the subject of Gene Quinn and your
statement:Quinn's error is he is looking at one side of the street
without considering the other. This is understandable considering the fact he is
not looking at the same mathematics as we are discussing.
I think
is best illustrated by an example. Quinn's idea of the mathematics in software
is something like:
m = 100
c = 299792458
e = m * c * c
print "Energy=", e
He is talking about the "e = m * c * c"
line as the maths involved. But, considering the computational theory part of
it, there is just as much maths in:
print "hello"
Whereas, he would deny that there is any maths there at
all. Get him to see that the second contains maths too, and he'll be
convinced.
Of course, the cynic in me tells me that he won't ever see
it because he doesn't want to be convinced!
--- Graham
Authored by: Anonymous on Wednesday, November 11 2009 @ 10:52 PM EST |
As a former maths competition "whiz kid" who subsequently moved on to
an unrelated career, but still loves to spend as much spare time as possible
immersed in mathematics and coding, I'm extremely impressed. I sincerely hope
this will be read by the Supreme Court (or are they supposed to be isolated from
outside info on active cases, like juries?)
I think the arguments of Quinn are pretty well annihilated by this essay. But I
don't think the patent lawyers really care whether software is mathematics. As
long as they can make money by claiming that software programs are inventions,
they will keep doing so, no matter what mathematicians and computer scientists
say about the topic.
In an adversarial system of justice, lawyers are fundamentally not trying to
arrive at the truth - they are trying to win. Our hopes rest with judges.
Authored by: Anonymous on Wednesday, November 11 2009 @ 10:57 PM EST |
One of the best descriptions of the BASIC operations a computer uses was
provided by Theodor Nelson in "Computer Lib"... which, really, should
be required reading for a LOT of people.
Software patents, I suspect, would qualify as "Cybercrud".
Authored by: Anonymous on Wednesday, November 11 2009 @ 11:16 PM EST |
This (lovely!) article also presents one of my pet concerns with software
patents. Patents are supposed to disclose the invention to others skilled in the
art. I believe that most current software patents "disclose" the
invention only to those skilled in legal language. This article supports the
idea that if software is patentable, the only proper disclosure is a
turing-equivalent code for the software being patented. In other words, the only
proper disclosure for a software patent is source code for the patented item in
some computer language. Any patent system that includes the idea of software
patents must be open source!
Authored by: leopardi on Wednesday, November 11 2009 @ 11:29 PM EST |
I think you need to mention that the Lambda Calculus itself
has been patented, as U.S. 7,146,604, Program operators for composing abstractions, assigned to Xerox Corporation.
Since, by your argument, every computer executes
the
essential reduction algorithm of the Lambda Calculus as
it runs, every computer
violates this patent. Also, by your
argument, the execution of any algorithm, by
any means,
including mental steps by a human, violates this patent.
From the patent:
What is claimed is:
1. A method for
composing programming abstractions,
comprising:
defining a composition
operation for composing a first
abstraction and a second abstraction with a
selected
composition operator; each abstraction having a form, a
prefix, and at
least one expression; unfolding the
expressions from the first and the second
abstractions by:
(a) removing the prefix of each abstraction, and
(b)
substituting formal parameter names in each expression with
a common
parameter name to define open variables;
transforming the unfolded expressions
to a reduced
expression using a composition pattern that tunes semantics
of the
selected composition operator; selecting a prefix
that depends on the selected
composition operator and the
form of the first abstraction and the second
abstraction;
the selected prefix having a formal parameter with a type;
and
nesting the reduced expression in a third abstraction
that composes the first
abstraction and the second
abstraction by: (i) appending the selected prefix to
the
reduced expression, (ii) binding the open variables of the
reduced
expression to the formal parameter of the selected
prefix, and (iii) computing
the type of the formal
parameter of the selected prefix.
Authored by: Anonymous on Thursday, November 12 2009 @ 12:42 AM EST |
ahhh, but with error, is what ms has a patent on, as well as the infinite
loop (sorry apple)
Authored by: IMANAL_TOO on Thursday, November 12 2009 @ 01:13 AM EST |
Very Good.
Hopefully this will be forwarded to the relevant decision makers.
Thanks PoIR!
---
______
IMANAL
.
Authored by: Mike_no_Softie on Thursday, November 12 2009 @ 03:03 AM EST |
Very good article; I now understand Computation Theory better. I just want
to add some thoughts which were inspired by this article and which may make the
argument clearer for lawyers.
You ask the question: Why were
mathematicians so interested in effective methods in the 1920s?
You
turn to Hilbert but I want to start somewhat earlier.
Sir Isaac Newton
introduced the mathematical technique of calculus and used it in physics to
formulate his famous laws.
Calculus as introduced by Newton et al. led to some unfortunate mathematics. Just as an inexperienced programmer can botch it and
develop a program that leads to absurd results so an inexperienced mathematician
could prove absurd theorems as you state in the article.
The reason was
that Sir Isaac did some voodoo with dividing by 0. From school everyone should
know this is absolutely forbidden. The rules which were needed to do calculus
were developed in the 19th century (excluding the division by 0) and further led
to the very careful analysis in early 20th century of the foundation of
mathematics.
The calculus as developed by Sir Isaac contained a lot of
hand-waving. A computer (or a student) could not use it because there was no
algorithm. Only by knowing what the result was (by observing Nature) could you
use (the division by zero) calculus. The problem with this approach is that this
was acceptable and do-able for physicists but unacceptable for mathematicians.
So that's why as you describe early 20th century mathematicians developed
effective methods etc. and made a new subdivision in mathematics which became
known as theoretical computation science and as a by-product led to the art of
programming as we now know it.
Computation Science did not only lead to
programming but is also used in physics. A recent successor of Sir Isaac Newton as Lucasian Professor of Mathematics, i.e. Stephen Hawking, used Computation Science (in the same way his predecessor Newton used calculus) to compute how the world will develop.
See for instance
Bekenstein
-Hawking entropy. The physical 'entropy' is closely linked with the
Computation Science 'information'. One lies in physics and the other in
mathematics. Just as velocity lies in physics and the derivative lies in
mathematics.
I am wondering if a lawyer would dare say that the
research Professor Hawking has done is patentable. The argument that software is
patentable can be repeated for the work of Professor Hawking making the physics
he researched or at least the mathematics (Computation Science) he used
patentable!
Nothing witty to say ;(
Authored by: halfhuman on Thursday, November 12 2009 @ 04:09 AM EST |
This is truly a magnificent achievement: short (yes!! *very* short),
beautifully argued, beautifully written.
If ever "techno-legal" becomes as
important as "medico-legal", this text should be published as a classic, on
thick creamy paper, intaglio printed, bound in vellum!
The challenge of
implementing that sort of quality in an ebook makes one think about where ebooks
and their readers should be heading.
PS: of course, philosophically, I
still have many questions, which chiefly concern the nature of mathematics and
are completely irrelevant to software patents.
Authored by: Anonymous on Thursday, November 12 2009 @ 04:39 AM EST |
At different stages, software is different things.
1) At the source code stage, the source code is a text file which is an abstract representation of an algorithm - the algorithm on which software patents are now being issued should not be patentable. Other things in software that have been granted patents are specifications for protocols or data formats. Specifications are not inventions and should not be patentable. A patent on a specification for a PC which consists of a certain combination of known components would not be entertained even if nobody else had used that combination before, so why should one be allowed in the case of software?
2) After compilation, the byte code that sits on a CDROM,
USB drive, floppy, hard drive, or RAM is simply data, and as
such is not patentable.
3) When executing, a computer program executes a series of
machine code instructions. These instructions are a small
closed set of standard machine code instructions which are
defined by the manufacturer of the CPU hardware, and these instructions are no more novel or inventive than bricks used for building houses are. Hence the only claim for novelty at this stage is that the sequence of standard instruction bytecodes is somehow novel. However, this claim is very dubious. Computers are intended to be programmed using different sequences of bytecodes in order to achieve different effects, in the same way that bricks can be used in different
patterns and combinations in order to build different
structures - their use in different combinations is
therefore obvious because that is their entire purpose. We
do not allow patenting of different patterns of bricks - for
example "Flemish bond", or "embodiments comprising of single
brick for a variable height of between 900mm to 1.5m with
half brick of 0mm to 600mm high on top with a common flush
face on both sides", so why would analogous patterns of
bytecode be allowed to be patented in software?
4) The key things that are being abused in order to get
software patents issued are process patents and patents on
specifications. Patenting is allowed on the former for
chemical patents, and the latter for alloy patents. However
in both cases, patents are allowed on very narrow and very
specific combinations only - you cannot patent generic
methods. Simply changing one parameter in the process or
specification or using a different physical process with the
same intermediate products will bypass the patent. In
addition, the patent is invalid if the combination does not
offer a specific and definite stated advantage over other
possible parameters or combinations, or if only a finite set
of valid combinations exist. This case has to be argued
properly if software patents are to be defeated.
Authored by: Anonymous on Thursday, November 12 2009 @ 04:49 AM EST |
76448138777820464475449067462956899698871275124041733111387061633796085087011546
57127315715715183103786690838842057663448692686822028958021325692099460639866634
95261056306863173455556231230173476344112342108067535759186614428675444277530314
74656980914596916254593025978205641462663393995957503154589168258608957820290802
00401057145615541448015100347934825482813054866159589778397945755009577548211936
32790604385234415220988886139659002763791899611350824975227184396951643592164058
39044787633420886686322661391855816465577463400282766289487307905161041825606196
45559072479385491144511149301875412127208909343404243676606863345670395926693599
401154334328917104000000000000000000000
\1034287853323974609375000
Authored by: mpellatt on Thursday, November 12 2009 @ 05:42 AM EST |
Reading this article, and (a few of) the copious references, especially the
early history of computing and the incredible work of Turing, brings about a
true sense of humility and reminds me how today's tech is truly built by those
who stand on the shoulders of giants.
Not-so-hidden agenda: Software Patents kill innovation.
Authored by: Ian Al on Thursday, November 12 2009 @ 06:11 AM EST |
I think the Supremes want you to be able to.
In one of the links that PJ
provided, the ex of one of the Supremes referred to the Bilski certiorari
petition citing the Supreme Court's precedent declining to limit the broad
statutory grant of patent eligibility for "any" new and useful process beyond
excluding patents for "laws of nature, physical phenomena, and abstract ideas."
The ex suggests that the Supremes are likely to reverse the clear, bright line
test of machine or transformation. Therefore the
argument you put needs to put
software into the exclusion zone defined by the Supremes previous guidance
rather than the District test from Bilski.
The maths stuff expands on what
happened when modern computation theory began with research on effective
methods, which are procedures to perform a computation that is entirely defined
by their rules. It seems to me that this is making the point that software to
achieve a computation is maths.
Any computation in pure maths must be
unpatentable subject matter even if you use a slide rule, computer or pencil and
paper to do it because it is dealing with abstract ideas. However, in Diamond v.
Diehr the Supremes make the distinction between pure maths and applied
maths.
Now the conclusions,
All software is data.
All software is
discovered and not invented.
All software is abstract.
All software is
mathematics.
may still be true, but may not stand in the way of
patentability because they may only be true for software that achieves a
computation in isolation. In other words, when the software produces an abstract
answer to an abstract question.
You referred to Diamond v. Diehr where the
respondents did not seek to patent a mathematical formula.
Instead,
they seek to patent protection for a process of curing synthetic rubber. Their
process admittedly employs a well-known mathematical equation, but they do not
seek to preempt the use of that equation. Rather, they seek only to foreclose
from others the use of that equation in conjunction with all of the other steps
in their claimed process.
So the mathematics is not patentable,
but the application of the mathematics to create a useful invention has that
potential.
You also referred to Parker v. Flook where the process is
unpatentable not because it contains a mathematical algorithm as one component,
but because once that
algorithm is assumed to be within the prior art, the
application, considered as a whole, contains no patentable invention.
Lets
suppose we find a piece of software that is not a well-known mathematical
equation but which, in conjunction with the other steps in the process, is
useful, novel and patentable. From Parker v. Flook, the process is now
(potentially) patentable because it contains a mathematical algorithm that is
not within the prior art and the application, considered as a whole, now
contains a patentable invention. Because the maths... sorry, software is no
longer an abstract concept, but is applied as a useful art it fails to fall
within "laws of nature, physical
phenomena, and abstract ideas."
We know
that the rest of the computer, its peripherals and its connected networks are,
in effect, prior art and the only thing left is the applied maths that we wish
to foreclose from others using with the same other process steps (aka, computer,
peripherals and networks).
So, with the danger of my sounding like a worn
record (Pat. Pending) we can do the inverse of Parker v. Flook and disclose
where in the software there is a something that meets the prior art and
innovation criteria. The syntax of the software is prior art. Even large
sections of the syntactic material will be indistinguishable from any other
large section. Anything patentable is higher up in the applied maths. What is
more, the patentable stuff has to be identified even when expressed in an
alternative syntax, otherwise it is a matter for copyright. The innovative
process formed in the software is what has to be identified for the patent
disclosure. As I might have said before, the challenge of disclosing the chunks
of source code that exclude prior art, but include the invention is the real
reason why it should not be possible to get a patent grant
for software.
The
ENIAC computer applied maths to making atomic bombs. Could this be patentable
subject matter? I think this is where the Supremes don't want a clear, bright
line because they want to leave patentable subject matter as broad as possible
with any grey areas resolved in the courts. (I think this is appallingly
dangerous in the area of software, but the amicus briefs make my point for me).
I think that the ENIAC calculations when applied to making atomic bombs were no
different to using slide rules (well, perhaps faster!) and there was no direct
involvement as a component of an invention.
Matching Diamond v. Diehr to
lossy compression of sound data, we might put words into the mp3 inventor's
mouth and propose 'Our process admittedly employs a well-known mathematical
equation from two centuries ago, but we do not seek to preempt the use of that
equation. Rather, we
seek only to foreclose from others the use of that equation
in conjunction with the compression of audio data files'. So, which side of the
Supremes broad and fuzzy gray line does that fit? I suggest that it is on the
patentable subject matter side of the line, but only just. If the invention
started and stopped at the data files then it would be outside. However, the
invention should be considered as starting with the generation of the original
sounds and ending with the decoding and playing back of the files as audio. The
preclusion of delays in a process or the process being between more than one
communicating or interconnected machines would preclude audio recording and
radio transmission of sound from being patentable subject matter. The alarming
thing about the mp3 patent is that there are no newly invented components and
therefore nothing in the software that needs to be disclosed in detail. Any
machine that uses the mp3 maths to encode audio into a computer file of any
format or file system would be covered.
Lets play with Diehr a little more
by considering other inventions with no new components. A word processor is
potentially patentable subject matter. It takes external data from the keyboard,
gives the operator feedback to aid editing (but, I won't reveal how I know
that!) and, with the possible time delay of saving the file and reloading it,
prints a hard copy. The file standard is a matter of international
interoperability and should not be patentable, but, that apart, the rest might
be standard programming and standard mechanical and electrical components as far
as the word processing invention is concerned. The Supremes attitude is likely
to mean that, were it not for prior art, the word processor could receive a
patent for what it does. It is an inventive application of standard components
and maths (the software). Now, I would not be able to use a web-based word
processor to contribute to my favorite blog without a paid-for licence. Since
this implies that a new use for a general purpose computer could be patented,
then the Supremes have got the standard wrong.
As a general rule for
patents, I continue to believe that any invention that is nationally or
internationally used as a standard for interoperability of machines or systems
should not be patent subject matter. This should be a new rule at the patent
offices in addition to inventiveness and prior art. Any patent request that
proposes a 'system' is making a call for patenting interoperability and should
be rejected. That would mean that Microsoft's API's and file systems would not
be patentable. However, their method of implementation (the detailed contents of
the code) might contain a protectable invention.
One final experiment with
Diehr, lets consider a patent on sudo whereby a requester pops up whenever one
asks for a computer operation for which permission is not given. There is no
input data. The requests for operation from the user are not an integral part of
the invention. The trigger for the invention to run is purely software/maths.
There is no output to a machine or peripheral because the text or requester
display is not used apart from the invention. In other words it is pure maths
running in the computer and not applied maths with a tangible result. The
results cannot be directly observed. They can only be deduced from the
operations of the software and are therefore abstract in nature. The invention,
itself, no matter how it is constructed, is not patent subject matter. If the
implementation of sudo (i.e. the maths/software) contains a significant
invention then this invention might be patentable, but only when that maths is
applied to patentable subject matter. However, as I point out earlier in this
response, it is not possible to form a patent request that discloses the
invention in maths/software that forms a component in a patentable subject
matter invention and tops the hurdles documented for patent award (words
carefully chosen). Also, if it is not part of a patentable invention then, on
its own, it is pure maths and is not patentable subject
matter. --- Regards
Ian Al
Linux: Viri can't hear you in free space. [ Reply to This | # ]
|
|
Authored by: kh on Thursday, November 12 2009 @ 06:44 AM EST |
When I was young I thought it would be impossible to imagine anyone copyrighting
a number. Now it happens all the time.
A CD, an album if you like, is a number. A song, a number. There is no doubt
these days that it is possible to copyright these numbers.
A book, digitised, is a number too. In many ways it always was. The ways of
putting together a finite series of symbols (letters) are, technically,
mathematical 'discoveries', but no one can really argue that writing a book is
anything like discovering a number.
For those reasons, I'm not sure I find the arguments all that convincing.[ Reply to This | # ]
|
|
Authored by: network_noadle on Thursday, November 12 2009 @ 07:06 AM EST |
I know it will mean some effort, but I think it would be worth it to review
this article and the comments, and then either update it, or republish it as an
updated article, incorporating the additions. I'd like to offer to help with
that, but I don't know what help I could offer.
I have been in
programming and IT for the last 20 years, and I found this discussion of my
chosen field fascinating. [ Reply to This | # ]
|
|
Authored by: Anonymous on Thursday, November 12 2009 @ 07:14 AM EST |
This was a brilliant paper. The implications of interpreted languages have not
been sufficiently appreciated, I believe. They offer a simple way to trump any
argument based on compiled code. If a program or business method is patentable
due in part to its manifestation as compiled code, then what about the same
program or method manifested in an interpreted language? Infringing or not?
Thanks so much
td[ Reply to This | # ]
|
|
Authored by: Anonymous on Thursday, November 12 2009 @ 07:54 AM EST |
I only wish this could have been put into one of the amicus briefs the Supreme
Court read! But it's too late for that now, unless someone has the means to
bring it to their judicial notice (we don't have any Supreme Court law clerks
here... do we?)
I am a mathematician, and this man is 100% correct in terms of mathematics and
computer science. I only wish the justices could read this...[ Reply to This | # ]
|
|
Authored by: Anonymous on Thursday, November 12 2009 @ 08:14 AM EST |
I can adapt a pick-up truck to do a particular job by adding a component or
system of components.
For example, I can add a trailer hitch to adapt the truck to pull a load. If
the trailer hitch has a new and useful feature, like a particular safety
mechanism or quick release, I can patent the modification to the truck to do
this load pulling job.
I can add a winch and a set of tie-downs to help haul my motorcycle into the bed
and secure it there for transport. If the winch or tie downs have new and
useful features, I can get a patent for the modification to the truck to do this
winching/hauling job.
These modifications to the truck make a new machine. I can get more for the
truck should I choose to sell it because it has more uses than when I first got
it.
If you copy my inventions, one or both at the same time, you are infringing my
patents. You would likely be charged with a per instance or per installation
count of infringement. Not a new count for every micro second the devices
existed.
I can adapt my computer to do a job by adding software. I can add a word
processor to help me type documents and/or I can add a spread sheet program to
help me keep track of all the money I make by suing other software developers.
I can get a patent on the modification to my computer to do these jobs.
Capisce?[ Reply to This | # ]
|
|
Authored by: GuyllFyre on Thursday, November 12 2009 @ 08:40 AM EST |
"They actually do want to get it right, you know."
I have severe doubts about this. Perhaps some of the judges would like to
"get it right" but I highly doubt the lawyers want this to be
clearer.
If there is only hard fact to work with, the trolls and other lawyers who make a
living off the ambiguities would be out of jobs. They are most likely NOT
willing to get it right so that they can keep arguing in circles and playing up
their little dog and pony shows to keep robbing others of their hard earned
money.
Lawyers thrive on ambiguity because it gives them a chance to make a living at
the expense of the rest of us.
I fear that while this is a great explanation, patent lawyer troll persons like Gene
Quinn will look to argue against these facts and keep them away from everyone. They
will find some nit to pick and make a big fuss about it and nothing will ever
get better.[ Reply to This | # ]
|
|
Authored by: Anonymous on Thursday, November 12 2009 @ 09:42 AM EST |
Sure, written as
PRINT "HELLO WORLD"
it does not look like the calculation example. But think of the translation and
symbolism operations.
It could also be
Screen[0][0] = 'H'
Screen[0][1] = 'E'
Screen[0][2] = 'L'
Screen[0][3] = 'L'
....
Where screen[row][col] represents a reference to the memory location
corresponding to the display.
Or translating the display address and the character data
0x80000 = 72
0x80001 = 69
0x80002 = 76
0x80003 = 76
....
What it would actually look like is (//// comment)
//// store the string in memory
0x10000 72, 69, 76, 76, 79 ....
//// reference the start of the string
LD X, #0x10000
CALL $PRINT
//// The $PRINT function will copy the string at the reference location to the
output[ Reply to This | # ]
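To make the same point in runnable form, here is a minimal Python sketch (the toy 'screen' list and the function name are illustrative assumptions, not anything from the example above). It shows that 'printing HELLO' and 'storing the numbers 72, 69, 76, 76, 79 in successive memory cells' are the same manipulation of symbols:

# a toy 'video memory': nothing but a list of numbers
screen = [0] * 80

def print_string(text, offset=0):
    # copy the character codes (plain numbers) into the buffer
    for i, ch in enumerate(text):
        screen[offset + i] = ord(ch)    # 'H' -> 72, 'E' -> 69, ...

print_string("HELLO")
print(screen[:5])                            # [72, 69, 76, 76, 79]
print("".join(chr(n) for n in screen[:5]))   # HELLO again

Whether those numbers end up in a Python list or in the memory cells behind a real display, the operation is the same arithmetic on symbol codes.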
|
|
Authored by: Imaginos1892 on Thursday, November 12 2009 @ 12:07 PM EST |
Thought I'd add a few ruminations about computers, from the intentional aspect.

A digital computer is only capable of performing a small number of simple
binary-logic operations. These operations - the machine instructions - are a set
of state transition rules permanently encoded into the computer's hardware.
Everything the computer does or can do is accomplished by arranging, sequencing
and repeating those simple operations. Every computer program consists of an
ordered list of operations selected from that set of rules. And in case anybody
had any doubt, binary logic is math.

Every operation possible to any digital computer can be reduced to Exclusive-OR.
Any digital computer can be modeled as a large, but finite, array of binary
bits. Each bit has a value of 0 or 1, and a state consists of the values of all
the bits at one instant in time. In any transition from one state to another,
each bit can either change, or not change. Therefore, every possible operation
can be exactly represented by an Exclusive-OR function with a sufficiently large
operand in which bits that change are set to 1, and bits that do not change are
set to 0.

Every computer program consists of a sequence of operations which can be
performed by a human. Every program is eventually reduced to a sequence of the
simple binary-logic operations that the computer can perform. A human can
mentally perform each of those operations, else no human could have designed the
CPU that does perform them. In addition, the programmer had to perform some
equivalent of those steps mentally in order to write the program.

A digital computer is fully deterministic. Once a computer has been set to some
initial state, all subsequent states can be predicted by mindlessly applying the
state transition rules encoded into its hardware, with the exception of
asynchronous events such as user input. Even then, given full knowledge of the
timing and content of those events, the results can be unambiguously predicted
by application of the same rules. So, programming the computer consists solely
of establishing the desired initial state and allowing it to progress through
all of the inevitable successor states. You could say that the whole computer
program and everything it will ever do is expressed in that initial state. Every
possible state has already been anticipated by the design of the hardware. The
bits are there. All a programmer or user can ever do is set them to 1's and
0's.
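As a minimal Python sketch of the Exclusive-OR claim (the 8-bit values are invented purely for illustration): a single state transition can be written as XOR with a mask whose 1 bits mark exactly the bits that change.

state_before = 0b10110010          # a toy 8-bit machine state
state_after  = 0b10011010          # the state after one operation

mask = state_before ^ state_after  # 1 wherever a bit changed, 0 where it did not
print(bin(mask))                   # 0b101000

# applying the mask with XOR reproduces the transition...
assert state_before ^ mask == state_after
# ...and applying it again undoes it
assert state_after ^ mask == state_before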
-----------------------
Nobody expects the Spanish Inquisition!!
[ Reply to This | # ]
|
|
Authored by: Daddl on Thursday, November 12 2009 @ 12:30 PM EST |
I already posted this in the draft thread, but as this is where this is actually
discussed, I c&p it here:
What I always find so hard to understand is why people need such complicated
explanations, mathematical theories, etc. to argue why software can't be
patented.
Basically a computer is a machine that has a lot of switches with defined states
('on' and 'off' for binary logic circuits) - like a traditional machine with
levers or valves and such. Only it has many more 'valves'. What software is, is
a defined state for some of the valves (open or closed, on or off, 0 or 1). A
computer is constructed to act upon those 'valve settings', like a steam machine
acts upon the opening or closing of valves. Can I patent a computer (or its
hardware parts)? Yes, of course - just like any other machine.
I can patent the 'valves' in a computer (and the technology to produce them,
the way they are arranged and connected), the hardware used to output 'valve
states' to humans (e.g. lcd or crt screen technology, light bulbs, speakers),
input technologies like keyboards or mice, the technology used to store 'valve
states' (ram, flash memory, harddiscs, holographic memory...), etc.
Can I patent the actual way I use that patented technology? No. If I used my
patented cork screw to remove dirt from under my fingernails after working in
the garden - could I patent this? No. If I used my notebook as a doorstopper,
could I patent this? No. Would I infringe another's patent doing so? No. So why
should I be able to patent a specific use of a machine, just because I use it in
exactly the way it was designed (and already patented) for? Any computer is
built to act in a predictable manner on the state of these 'valves'; it's the
basic function of a computer - so I'm simply using it inside its original
specification.
Whatever code I write and load onto a computer (i.e. whatever states I set the
'valves' to), the computer acts just the way it is constructed - it does not
become a different machine. My watch (mechanical by the way) doesn't become a
'new patentable machine' just because I set it to a different time. Nor can I
patent the specific time I set it to.
Software (as in whatever code we write in Assembly, C/C++, C#, Java, or other
programming languages) is just a convenient way to describe the specific state
of some of the 'levers' in a computer. Can I patent a description? No. Can I
copyright it? Yes.
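As a rough illustration of the point (a made-up toy machine, not any real hardware): the 'software' below is nothing but a table of switch settings, and the machine's behaviour is fixed by the loop that reads them.

# the 'program' is just a description of switch states
valve_settings = [1, 0, 1, 1, 0, 0, 1, 0]

# the machine's fixed, built-in behaviour: act on whatever settings it is given
def run(settings):
    for position, state in enumerate(settings):
        print("valve", position, "open" if state else "closed")

run(valve_settings)   # loading different settings does not change the machine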
I don't see any relevant difference between a computer and any other machine or
technological device ever devised. Be it the revolutionary 'sharpened rock on a
stick' mammoth hunting device, a steam machine, a laser gun or whatever. Any
distinction made is purely artificial.[ Reply to This | # ]
|
|
Authored by: Anonymous on Thursday, November 12 2009 @ 01:02 PM EST |
Sorry, I couldn't get past the first few paragraphs.
"You can't invent abstract ideas and mathematics. You can
only discover them."
This is completely a matter of opinion. This is an open
question in the philosophy of mathematics. This particular
viewpoint is a tenet of the school referred to as
"Mathematical Realism".
My opinion, not that it matters, is that maths are invented
all the time. If you follow the computing discipline of
formal verification, new logics are created, extended, and
scrapped based on how useful they are for proving the
correctness of programs. Separation Logic, for example,
is turning out to be a useful extension of C. A. R. Hoare's
logic.
You may be tempted to suggest that they are just...
"taking mathematic constructs out of the ether", and you
might further argue that there are an infinite amount of
such constructs.
But, my friend, what kind of infinity? Countable? Aleph
One? That slippery infinity in between infinities that
nobody has invented/discovered yet?
See my point?[ Reply to This | # ]
|
|
Authored by: Anonymous on Thursday, November 12 2009 @ 01:32 PM EST |
This is a very nice exposition of the mathematics. I have a Ph.D. in mathematics
and have many times had to explain to people how mathematics is about discovery
not invention; it is nice to see so many people understanding that.
However,
I am not convinced this is the best argument against software patents, for the
following reasons:
Is "software is maths" really true as
stated?
I'm not sure that it is. It depends on the definition of
software. If one adopts a definition that probably many non-mathematicians and
non-computer-scientists do, that a piece of software is a description, written
in a formal programming language, of a series of steps for a computer to
perform, then:
- That piece of software on its face is not
mathematics.
- Rather, that piece of software is equivalent to a
mathematical function.
The distinction may seem pedantic but I think
it is important. I think the legal types like Quinn are thinking about software
in a very concrete, "what is this on its face" kind of way, while the
mathematical types are looking beneath the surface to an underlying mathematical
concept.
The trouble with using this as an argument against software patents
is that if you are willing to look at underlying abstract mathematical concepts
you can do the same even for traditional nuts and bolts inventions built with
materials like wood, steel, nails, and screws. The steps needed to put such an
invention together can be mathematically formalized, with invalid steps
mathematically distinguished from valid ones, in a way very similar to the way
in which effective methods or algorithms have been mathematically formalized. To
demonstrate this, let me describe the beginning of such a
formalization
(non-mathematicians may want to skip over the next few
paragraphs):
Let M be a finite set called the set of material types, to whose elements we
give names such as "wood" and "steel". Let M' be a subset of M whose elements we
will call hard metals.

Define an object to be an ordered pair (R,m) where R (called the object's
region) is a connected subset of R^3 (3-dimensional space) and m is in M. (An
object is a region of space filled with a particular type of material.)

Define a nail with sharpness s to be an object (R,m) where m is in M' (a hard
metal) and R has a designated element p called the point, a designated subset H
called the head with centre point c in H, and every other point q in R has the
property that the vector from p to q makes an angle smaller than s with the
vector from p to c.

Define a screw to be (similar but more elaborate definition here).

Define an assemblage to be an ordered pair (A, G) where A is a collection of
objects whose regions have disjoint interiors and G is a subset of R^3 such that
every point in G belongs to the boundary of the regions of at least two
different objects in A. (A collection of objects with gluing or other attachment
on some of the joining surfaces.)

Now one can mathematically categorize the set of assemblages that can be built
out of a starting assemblage of raw materials. This can be done with a recursive
definition similar to that for computable functions. First one defines basic
building blocks such as moving, gluing, nailing, and screwing. Then one says
that assemblage (A',G') can be built out of (A,G) if either
- (A',G') is obtained from (A,G) by moving or gluing or nailing or screwing, or
- there is an assemblage (A'',G'') that can be built out of (A,G) and (A',G') is
obtained from (A'',G'') by moving or gluing or nailing or screwing.

We can say that (A',G') is obtained from (A,G) by moving if there exists an
affine transformation T of R^3 and a subset B of A with the following
properties. Here we let C be the union of the regions of the objects in B and D
be the union of the regions of the objects in A that are not in B.
- T(C) is interior-disjoint from D (we can't move one object into the space
occupied by another object we're not moving).
- G, C, and D have no point in common (if a region we're moving has a boundary
surface that overlaps one we're not moving, the overlap can't be an attachment
point).
- A' is the union of (T(R),m) over objects (R,m) in B together with the union of
(R,m) over objects in A-B.
- G' is the union of T(G intersect C) with G intersect D.

We can say that (A',G') is obtained from (A,G) by gluing if A' = A and G is a
subset of G' and (A',G') meets the definition of a valid assemblage.

We can say that (A',G') is obtained from (A,G) by nailing if there is a nail
N=(R,m) (with head H and shaft S=R-H) in A, with R disjoint from G, and a linear
transformation T of R^3 such that T(H) is interior-disjoint from the regions of
all other objects in A (I am disallowing countersinking here for simplicity; a
countersunk nail obviously requires a more elaborate definition), such that:
- A' is (T(R),m) together with the union of (R' - interior(T(R)), m') over all
(R',m') in A-{N} (the nail displaces material from the nailed objects in the
location where the nail now is), and
- G' is the union of G with the union over all (R',m') in A-{N} of R' intersect
T(R) (the places where the nail shaft touches the material it was driven into
are now attachment points that can't be pulled apart).

We can say that (A',G') is obtained from (A,G) by screwing if (similar elaborate
definition here).
(Non-mathematicians can return to the discussion here.)
The point is that questions about what configurations of objects can and cannot
be built mechanically could, at least in principle, be formulated as a
purely mathematical question, in much the same way as questions about what can
and cannot be computed. A tangible invention out of wood and nails is simply a
practical implementation of a series of building steps which are in turn
mathematically equivalent to a choice of a particular function between
abstractly defined sets. This is similar to the way in which a computer running
a program is simply a practical implementation of a piece of software that is in
turn equivalent to a purely mathematical abstract concept.
So, yes, software
"is" mathematics with the right definitions, but so too are the means for
building physical and mechanical inventions. I don't think the "software is
mathematics, mathematics is not patentable, therefore software should not be
patentable either" argument will convince the pro-software-patent types that a
computer running a particular software program is not patentable. At
best it will convince them that the software program in the abstract, written
down on paper but not running anywhere, is not patentable, or that the
mathematical abstractions to which the software is equivalent are not by
themselves patentable -- but that's not the same thing. They will think of the
software in the abstract as being like the artificial formalism I gave above of
mathematically modelling mechanical steps for building out of wood and metal:
not patentable by itself, but as soon as you actually run it on a computer, you
could be infringing a patent, much in the same way that as soon as you actually
build a device out of wood and metal using that formalism, it becomes an
infringement if the design was patented.
I think that there is a much better
chance of convincing the pro-software-patents legal folks that software is
distinct from other types of "inventions" if we focus on the fact (which was
also explained in the article) that computers are, essentially, universal
Turing machines and that there is no fundamental difference between the
notion of "a computer running a particular program" and "a computer running the
same program it was designed and built with but being given particular input
data". That, in my opinion, is where the difference between mechanical
inventions and computer software lies. There is currently no "universal
invention-building robot" which, when given a precise description of how to make
something, will go ahead and make it. Building a mechanical device requires
either human action or a specially-constructed, purpose-built robot, and it also
requires permanent commitment of materials and physical resources. However, to
run software on a computer, nothing is required except to store the program on
the computer in the same way any other kind of data is stored. I think it is
this fact that makes the difference between "a software program in the abstract"
and "a computer running a software program" fundamentally and qualitatively less
of a difference than the difference between "a mechanical invention
specification in the abstract" and "a physical device built from that
specification". It is not the fact that the software program in the
abstract is equivalent to a mathematical concept that makes the difference,
because the mechanical invention specification could also be represented by a
mathematical concept. Rather, the difference is that to move from the
equivalent-to-mathematics mechanical invention specification to an actual device
requires specific physical action, whereas to move from the
equivalent-to-mathematics abstract software design to an actual computer running
that software requires nothing more than operating a computer in the way it was
originally designed but feeding it specific input data.
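To see why 'running a program' and 'feeding the machine particular input data' are the same kind of act, here is a minimal Python sketch of a universal interpreter (the tiny two-instruction 'machine' is invented for illustration): the machine itself never changes; only the data handed to it does.

def machine(program, x):
    # a fixed little machine: it only knows how to follow these two instructions
    for op, arg in program:
        if op == "add":
            x = x + arg
        elif op == "mul":
            x = x * arg
    return x

# two different 'programs' are just two different pieces of input data
double_then_add_one  = [("mul", 2), ("add", 1)]
triple_then_add_four = [("mul", 3), ("add", 4)]

print(machine(double_then_add_one, 5))    # 11
print(machine(triple_then_add_four, 5))   # 19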
In other words,
should merely exposing an existing device to information qualify as creating a
patentable new device? I think one can make a very convincing argument that the
answer to that question is a resounding "no". And for that reason (not the fact
that the information provided to it is equivalent to mathematics), software
should not be patentable. The mathematics is a bit of a red herring here; the
key is simply that, because information by itself is not patentable (regardless
of whether it is mathematical information or not), merely exposing a device to
information and letting it process that information as it was designed to do
(even if the designer didn't know what future information it was going to be
exposed to) is not enough to create a new device that is subject to patent
protection. [ Reply to This | # ]
|
- A nice exposition of the math(s), but is it convincing against software patents? - Authored by: Anonymous on Thursday, November 12 2009 @ 01:38 PM EST
- A nice exposition of the math(s), but is it convincing against software patents? - Authored by: PolR on Thursday, November 12 2009 @ 02:49 PM EST
- A nice exposition of the math(s), but is it convincing against software patents? - Authored by: Wol on Thursday, November 12 2009 @ 06:40 PM EST
- Ah, but your analogy gives the answer. - Authored by: Anonymous on Thursday, November 12 2009 @ 08:03 PM EST
- You are absolutely right. - Authored by: Anonymous on Thursday, November 12 2009 @ 08:07 PM EST
- (program == data) < == > Church Turing Thesis == TRUE - Authored by: Anonymous on Thursday, November 12 2009 @ 11:15 PM EST
- If it were so, it would be. But as it isn't, it ain't. - Authored by: Anonymous on Friday, November 13 2009 @ 02:43 AM EST
|
Authored by: vortex on Thursday, November 12 2009 @ 02:38 PM EST |
I should begin by pointing out that I, too, have spent a quarter century
working in the software industry – and do not believe that software should
be patentable. However, I find the "because it's mathematics" argument weak.
Fundamentally, the correct reason why it shouldn't be patentable is that patents
are an abysmally poor fit for this industry.
However, as to the
present discussion, I must point out that what computers do is not all
mathematics. In so far as what my computer does is purely computation –
and it does a great deal of that – it is, indeed, operating as a Turing
machine. However, it is also ignoring my mouse and my keyboard.
To
properly model a modern computer and take account of user input it is
necessary to model it as a Turing oracle machine – as I type
this, the computer isn't working out what characters to display on the screen,
it's consulting some hardware that reports what I'm typing. The program then
follows purely mathematical rules in deciding what to do with what I'm typing,
namely to display that text and add it to the data in a field of a form that
I'll later submit. While vast amounts of what a computer does are
deterministically controlled by the programs the computer is running, diverse
things aren't: aside from what I type and how I move the mouse, the computer
also encounters such non-determinacy when it attempts to interact with remote
computers.
To submit this form, this computer shall do lots
of purely Turing activity: but when the program asks Linux to connect a
socket to a remote computer, the program has absolutely no way of knowing when
it'll get its response. As it happens, I am the author of the code
that handles that indeterminacy, so I know its details horribly well. The
browser sends its request and gets on with something else. Sporadically,
thereafter, it checks to see whether it's got an answer. Typically, it'll get
an answer eventually, and get back on with being nice and Turing about it: but
every time it stopped to ask for that answer, it was consulting an oracle, in
the terms of the article. Once the connection is made, this computer shall send
some message and wait for an answer: again, the external network functions as an
oracle.
Likewise, in fact, the way the computer gets keyboard and
mouse input involves hardware that works in parallel with the CPU to notice such
input: when it does, it generates an interrupt that causes the program
to deal with that input. Interrupts are expressly not part of the
Turing behaviour of computers: they are not determined by the
algorithms the computer is running; every interrupt is a manifestation of some
sort of oracle, in Turing's terms. We have designed computers so that these
oracular interventions disrupt the model as little as possible (we register nice
little Turing fragments of code to handle them, and we take care to write those
in such a way that the intrinsic indeterminacy of how and when they get called
isn't a problem): but they still happen, so the modern computer is, in fact, a
Turing oracle machine – at least whenever it is being
interactive.
Consequently, an argument based solely on "software is
mathematics" is both false (it presumes software is pure-Turing, not
Turing+oracle) and inadequate: those who want software patents shall walk round
it by exploiting the oracular character of interrupts. Indeed, while some
patents have been laughably blatant in laying claim to pure Turing activity,
many have taken the trouble to couch their claims in terms of the
user's input and network activity – both of which are emphatically on the
oracular side of Turing's analysis. As PoIR says, that's the side where patents
are legitimate: but, where the non-Turing activity arises purely from interrupts
driven by entirely conventional hardware, we need the law to grok that the
involvement of this not-just-mathematics should not serve as carte
blanche for patent land-grabs. Only when a computer is entangled with some
hardware that does something novel, albeit assisted by the computer and its
programs, should the novel thing the hardware is doing be patentable: and, even
then, the software that facilitates it in doing this should not be
subject to patenting, any more than mathematics is, even despite any degree to
which the software's involvement with oracles takes it outside the pure-Turing
domain.
We should not be using "it's all mathematics" as if it were
our strongest argument: first, because it isn't all mathematics, and
the oracular parts really do matter; second, because there is a far
stronger argument, at least where it comes to persuading legislators and judges.
That argument is that the idiom of patent protection is unworkable in the case
of software, for diverse reasons that RMS and others have expressed far more
eloquently than I can. It is unworkable in court because it produces too much
unpredictability (where judges think law should produce predictability) and it
is unworkable in the economy because it cripples those who are actually doing
something that adds value (and value is the foundation of all the wealth that
governments tax; the less of it there is, the less they have to spend). Even
when software isn't only mathematics, it's still far better for freedom, the
economy and justice if the activity of producing it and using it is as
unfettered as possible.
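A rough Python sketch of that distinction (the function names are chosen only for illustration): the first function is pure Turing-style computation; the second consults an 'oracle', input whose timing and content no rule inside the program determines.

def pure_turing_part(n):
    # fully determined: the same input always yields the same output
    return sum(i * i for i in range(n))

def oracular_part():
    # the program cannot compute what will arrive here, or when;
    # it can only wait and see what the outside world supplies
    return input("type something: ")

print(pure_turing_part(10))   # always 285
print(oracular_part())        # depends entirely on the user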
--- Finally – the SCO group saga's end
game.
Break out the pop-corn, sit back and watch the fire-works.
[ Reply to This | # ]
|
|
Authored by: rhdunn on Thursday, November 12 2009 @ 04:28 PM EST |
The section "Programming Directly from Mathematical Languages"
reminded me about learning Z Schema [http://formalmethods.wikia.com/wiki/Z].
Each function in Z Schema is a transformation from the inputs to the output(s)
using mathematics to express the transformation.
There is a set of pre-conditions that are expected to be true for the inputs of
the function (the axioms, if you will). You also specify post-conditions (things
expected to be true when the program is finished) and invariants (things you
expect not to have changed in the system).
With this, you prove that the post-conditions are met and the invariants hold
after the program is completed using the pre-conditions and set theory. If these
hold, you can then transform it into a computer program.
Applications of this are in areas where correctness of the program is critical
(e.g. in a nuclear power plant).[ Reply to This | # ]
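The flavour of it can be approximated informally in ordinary code. A toy Python sketch (the bank-balance example and the names are invented for illustration), with the pre-conditions, post-condition and invariant written as run-time assertions rather than proved:

def withdraw(balance, amount):
    # pre-conditions: what must hold before the operation
    assert amount > 0
    assert amount <= balance

    new_balance = balance - amount

    # post-condition: what must hold afterwards
    assert new_balance == balance - amount
    # invariant: a balance never goes negative
    assert new_balance >= 0
    return new_balance

print(withdraw(100, 30))   # 70

In Z, of course, these conditions are proved to hold for every possible input using set theory, rather than merely checked when the program happens to run.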
|
|
Authored by: Anonymous on Thursday, November 12 2009 @ 04:33 PM EST |
First, kudos for all that work! We could all do with the revision :-) I'm
offering the following as constructive criticism...
Here we are
looking at a two-way street. It is correct that we can describe just about
everything mathematically...
The notion that, if software is
unpatentable because it "is maths", then a steam engine must also be
unpatentable because it can be completely described by maths, is a very
dangerous one, FUD or not. The idea of accidentally pulling the plug on the
whole patent system will worry judges. The "two way street" argument applies to
you, too: you don't just have to show that software is maths, you have to show
why everything else isn't maths.
(also bear in mind that anybody whose
eyes glaze over at the maths might skip to this bit as it really is The Big
Question).
You rightly rebut Quinn - but he has blown it by showing that he
doesn't understand what "algorithms" and "equations" are, so he's a bit of a soft
target. "We don't patent maths textbooks" is also a bit of a straw man.
I
think the argument that would convince me (the language could be tightened up
a bit) is:
Computers work, fundamentally, by transforming sets of symbols
(data) according to a set of instructions (algorithms/software - which are also
symbols) in a way that is defined, with full mathematical rigor by the abstract
mathematics of computing theory.
Other, patentable, machines work by (insert
the standard wording about forces of nature and transformation of matter). They
do not work by "transforming symbols" and the mathematical describing them can
eventually be traced back to empirical observations about the real world.
Physics has yet to perfect the mathematical "theory of everything" which
completely describes the known universe to the extent that mathematics describes
the operation of computing devices.
Note that the box of tricks we call a
computer will contain "input/output" devices (keyboards, video displays) which
turn forces and energy to/from sets of symbols. These parts may well be
patentable, as may any portion of an invention which describes a new such
machine. However, the only possible interaction between the software and
the computer is to cause the manipulation of symbols - which is
mathematics.
(Note - the other challenge is to avoid the dreaded "software
per se" argument... but I think that is an inherent danger in the whole
"computing theory" argument).
[ Reply to This | # ]
|
|
Authored by: Anonymous on Thursday, November 12 2009 @ 05:17 PM EST |
Perhaps I don't understand it properly.
"All software is discovered and not invented."
I discover a vaccine for H1N1 by testing materials and analyzing the results.
Now perhaps the meaning of a = b+c was discovered, but when I use a = b+c, in a
program, I'm not discovering what it does, I'm using a previously discovered
thing to construct something new. Further, when I say a = b+c, d = e+f, g = a+d, I
am constructing the same thing I am constructing when I say d = e+f, a = c+b,
g = d+a, but the expression of the thing I am constructing is different.
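A quick Python sketch of that point (the starting values are invented for illustration): two different orderings of the same assignments describe the same constructed thing.

b, c, e, f = 1, 2, 3, 4

# one expression of the construction
a = b + c
d = e + f
g = a + d

# a different expression of the same construction
d2 = e + f
a2 = c + b
g2 = d2 + a2

print(g, g2)      # 10 10
assert g == g2    # same thing, differently expressed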
So the question is, how do I protect the effort and expense I go through to
create my new thing? I clearly can not use copyright, because copyright
protects only expression, and it is not possible to protect every possible
expression of creating object g.
But to be honest, it is also impractical every time I create a thing g or h or
i, to do a patent search to see if someone else might have patented that thing.
But this objection is outside the bounds of "Software is
Mathematics".
And while I understand that software is based on mathematical principles, the
pattern of bits stored on a medium to control a machine are a physical
implementation of a specific concept.
If I read a book that says do step a, then step b, then step c, and then I carry out
those steps, are they being carried out in exactly the same way as Sally two
cubicles over carries out the steps? If someone says step a is to bath an
object in a chemical bath for five minutes, and operator a knows that is just
enough time to go outside and smoke a cigarette, but operator b stands by the
machine and carefully times the period with a stop watch, you would think
operator b would get the better results. But operator a has been doing it that
way for years, and knows exactly that it is the length of time to smoke his
particular brand of cigarettes, and his product always comes out right and
operator b gets a 50 percent yield. This is the art necessary for the invention
to work. But if I am using a pattern of bits on a medium, with a machine with a
known clock speed and data transfer speed, there is no variance possible as
there is with a person reading a book. This, in my personal opinion is where
you have a specific implementation of the concept, and the machine loaded with
running software is no longer an abstract thing, but a concrete thing.
Now quite frankly, all this is arguing to allow a general purpose computer, or
group of computers with instructions to direct specific behavior to be a thing
that can be patented. And I am against the way software patents are granted
today. So I am painting myself into a corner. But I am going to say that a
program that is designed to run on an Intel processor is not the same thing as a
program that does the same thing on a Power PC processor, or a network or a cell
phone or a virtual machine. I am going to argue each is a separate thing that
would need to be protected by its own patent. Now if I have something that can
read and write fat32 files on a machine driven by an 8080 processor, the thing
that does the same thing using a Pentium Processor is a different thing and
would need to be protected by a different patent. And if it wrote those files
to an IDE drive, it would need to have a different patent to write the files to a
SCSI drive, and a different patent to write it to a USB drive. You can't patent
the world with a concept. It has to be a specific thing. Before anyone says
this is impractical, go back to the example of writing code where you have to do
a patent search every time you write a function. Besides the patent lawyers
ought to have a ball with this idea. You want to patent sudo? Fine: document
the invention with specificity. You document the instruction set used on the
machine where the invention applies, the environment and so on. And when you
finally get around to porting it to the ARM processor, your patent lawyers
pocket another fat paycheck.[ Reply to This | # ]
|
|
Authored by: rhdunn on Thursday, November 12 2009 @ 05:20 PM EST |
John Conway developed the Game of Life
[http://en.wikipedia.org/wiki/Conway%27s_Game_of_Life], a cellular automaton
based on a simple set of rules.
The wikipedia article covers it well, and gives examples of possible patterns
and types.
What is interesting is that this is Turing complete and can even be used to
create computers that are capable of running software
[http://www.calcyman.co.uk/life/universal-CC/index.htm], albeit very slowly![ Reply to This | # ]
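For the curious, a minimal Python sketch of the rules (a tiny example pattern, written for clarity rather than speed):

from collections import Counter

def step(live_cells):
    # live_cells is a set of (x, y) coordinates of live cells
    neighbours = Counter(
        (x + dx, y + dy)
        for (x, y) in live_cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # a cell lives in the next generation with exactly 3 neighbours,
    # or with 2 neighbours if it is already alive
    return {c for c, n in neighbours.items() if n == 3 or (n == 2 and c in live_cells)}

# a 'blinker': three cells in a row oscillate forever
cells = {(0, 1), (1, 1), (2, 1)}
for _ in range(3):
    print(sorted(cells))
    cells = step(cells)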
|
|
Authored by: Anonymous on Thursday, November 12 2009 @ 05:34 PM EST |
Consider an analogy. What mechanical things can be built from 26 grams of iron?
(This is a cube just under 2cm on a side.)
Each of those things can be described by a string of 6.023 x 10 ** 23 bits.
How? Well, in the cube, there are that many atoms. So, for each possible
location, either there is an atom there, or there isn't. So, if the bit is '1',
put an atom there; if the bit is '0', leave it out.
(Want something bigger? Add more bits. Want something with more different
elements? Add more bits.)
What you've argued is that a program can be described by a string of bits, or by
a Turing machine, or by ...
Saying that a constructed thing can be described by a string of bits, like those
bits of iron, or like a program, says nothing at all about whether it can be
constructed, or does anything meaningful if it is constructed, or does precisely
what you want it to if it is constructed.
To construct pieces of iron, or programs, or ... requires engineering. Because
we won't live long enough to look at all the possible pieces or programs.
I think you need to go back to the drawing board on this argument. While I
dislike software patents, I find this argument as to their unpatentability
completely unpersuasive. It confounds construction with description, and so
fails.
Pat McGee
[ Reply to This | # ]
|
- confusing atomic number with mass - Authored by: globularity on Thursday, November 12 2009 @ 05:47 PM EST
- I dislike software patents, but I found this argument completely unpersuasive - Authored by: PolR on Thursday, November 12 2009 @ 05:54 PM EST
- You are confusing a description of a patented thing with the thing itself - Authored by: jesse on Thursday, November 12 2009 @ 06:19 PM EST
- Don't make a mountain out of a Mole - Authored by: SpaceLifeForm on Friday, November 13 2009 @ 01:31 AM EST
- I dislike software patents, but I found this argument completely unpersuasive - Authored by: Anonymous on Friday, November 13 2009 @ 05:37 AM EST
- I dislike software patents, but I found this argument completely unpersuasive - Authored by: Ian Al on Friday, November 13 2009 @ 05:54 AM EST
- I dislike software patents, but I found this argument completely unpersuasive - Authored by: Vic on Friday, November 13 2009 @ 10:30 AM EST
|
Authored by: Anonymous on Thursday, November 12 2009 @ 06:17 PM EST |
How about this much simpler form:
1. All software capable of being written by humans in finite time, is capable of
being expressed as a finite sequence of binary digits.
1a. An n-bit register can attain exactly 2^n different possible states. Therefore
there can also be no more than 2^n different software instructions that can fit
into that register. Similarly, for any file of n binary bits in length, there can
be only 2^n different possible files. So, for example, a 64-bit machine could
theoretically address 16 exbibytes of memory, and there can be only as many
possible programs for it as will fit within those 16 exbibytes.
2. A finite sequence of binary digits is a finite number, also capable of being
expressed in base 10 numerals.
3. A number is not patentable. (However it may be copyrightable.)
4. All programs are numbers, and therefore not patentable.
Q.E.D.[ Reply to This | # ]
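Steps 1 and 2 can be made concrete in a couple of lines of Python (the six example bytes happen to be x86 machine code for 'return 42', but any bytes at all would do): a finite sequence of bits is just one, possibly enormous, integer.

# a (very small) compiled program: just a sequence of bytes
program = bytes([0xB8, 0x2A, 0x00, 0x00, 0x00, 0xC3])

# the same program read as a single number
as_number = int.from_bytes(program, "big")
print(as_number)   # one large integer, expressible in base 10

# and back again: the number alone reconstructs the program exactly
assert as_number.to_bytes(6, "big") == program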
|
|
Authored by: Sander Marechal on Thursday, November 12 2009 @ 07:01 PM EST |
The remarks about LISP in this article make me wonder. Isn't it trivial to
actually prove to lawyers and judges that software is math?
The big stumbling block seems to be that most software is a series of
instructions, while math is formulas and expressions.
So, take any piece of software and write the equivalent in LISP. You now have a
set of S-expressions that you can solve/reduce step-by-step. It's simple lambda
calculus and squarely the domain of mathematics.
This is also interesting in reverse: Any software implemented in LISP cannot
possibly infringe a patent because it's clearly just a mathematical expression.
:-)
Perhaps this can easily demonstrate (in an easier way) that there is no
difference between software-as-a-set-of-instructions and a mathematical
expression. After all, LISP is Turing-complete so *all* software can be written
in LISP :-)
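Python is not LISP, but the same reduction can be sketched there (a toy evaluator for S-expression-like nested tuples, written purely for illustration): the 'program' is literally an expression that gets reduced step by step to a value.

import operator

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

def evaluate(expr):
    # an expression is either a number or a tuple (operator, left, right)
    if isinstance(expr, (int, float)):
        return expr
    op, left, right = expr
    return OPS[op](evaluate(left), evaluate(right))

# the S-expression (+ (* 2 3) (- 10 4))
program = ("+", ("*", 2, 3), ("-", 10, 4))
print(evaluate(program))   # 12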
---
Sander Marechal
Geek, Programmer and many more, but not a lawyer[ Reply to This | # ]
|
|
Authored by: Anonymous on Thursday, November 12 2009 @ 08:44 PM EST |
There is one thing about software that I must continuously come back to in any
discussion about software versus patents.
No matter what - software does not function in a void. Software requires a
computer (hardware) before the process described in the software can be
understood. It is simply not possible to separate software from the hardware and
come up with any type of meaning.
For example, let's take the most basic of assembly instructions:
LDA #0
So I am loading the accumulator with the value of 0.
Tell me what this means in terms of a method you can follow.
This instruction could make a white dot appear on your favorite video display,
could disable the mouse .... could do anything you could imagine since we don't
have a computer from which to run the instruction and get our result.
This instruction can also produce different results depending on the hardware it
is run on.
So without a specific set of hardware, software holds no value - makes no
sense.
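A toy Python sketch of that point (both 'machines' and their instruction tables are invented for illustration): the very same byte means entirely different things to two different machines, and nothing at all on its own.

program = [0xA9]   # one byte of 'software', meaningless in isolation

def machine_a(code):
    # on this hardware, 0xA9 means 'load the accumulator with zero'
    for byte in code:
        if byte == 0xA9:
            print("machine A: accumulator cleared")

def machine_b(code):
    # on this other hardware, the same byte means 'sound the alarm'
    for byte in code:
        if byte == 0xA9:
            print("machine B: alarm!")

machine_a(program)
machine_b(program)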
All these Lawyer descriptions of how computers are laid out as a series of
switches misses the point - unless you know how those switches are laid out,
only then does the software sequence start to have value. Bring in a different
machine with a different layout and the software you just produced will not
work.
Since software does not operate in a stand alone capacity. How can it be thought
of as a patentable invention?[ Reply to This | # ]
|
- Simple Terms - Authored by: Anonymous on Thursday, November 12 2009 @ 09:21 PM EST
- Simple Terms - Authored by: Anonymous on Thursday, November 12 2009 @ 11:25 PM EST
|
Authored by: Anonymous on Thursday, November 12 2009 @ 09:29 PM EST |
The fundamental lessons of it for patent lawyers are:
(1) A computer is *intended to* do information processing which a human could
do, but very fast. *Any* such information processing. *All* such information
processing. Adding different software is what it's *designed* to do.
(2) Therefore, adding "plus a computer" to information processing
which could theoretically have been done by a human does not fundamentally
change it, *and is obvious* -- it should be evaluated exactly as it would be if
it were done by a human (however difficult this might be). Just like "plus
writing it in a book" doesn't make a story patentable.
There is a second aspect which has to be done to prevent the patent system from
issuing disastrously bogus patents: business method patents have to go away.
(They are a combination of abstract, purely mental, though not necessarily
mathematical, algorithms and laws of nature.) If judges understand computation
theory, they'll understand that fundamentally all software patents -- the ones
we're trying to get rid of -- are business method patents "plus a
computer".
We have to knock out *both* the business method patents *and* the "plus a
computer".
Fundamentally both are information processing. Information processing consists
of *nothing but* abstract ideas, and is therefore totally unpatentable.
This still allows patents on a novel industrial procedure which has new aspects
*outside* the information processing.[ Reply to This | # ]
|
|
Authored by: ewe2 on Thursday, November 12 2009 @ 11:01 PM EST |
Firstly, you need to do a doctorate :) I learnt more maths with this intelligent
instruction.
Although it seems clear to me, I see some respondents resisting the logic of
your argument, specifically "discovered" vs "invented".
Allow me to present a little parable of my own:
Take 2a * b = c. If a = 2, b = 2 then
(2x2) x 2 = c
4 x 2 = c
8 = c
Try solving the equation using addition instead of multiplication:
(a+a) + 2 + 2 = c
4 + 4 = c
8 = c
Has anything changed? No, we have just noticed a new way of expressing the same
relationship between a, b and c. This is what I take to mean
"discovery" because it certainly isn't "invention". We can
infer that multiplication is a quicker form of addition but we could continue to
add if we chose (in fact many logic chips still do).
Or perhaps you'd prefer solving for b:
b = c / 2a
2 = c / 4
c = 4 * 2
c = 8
The relationship stays the same regardless of whether you use subtraction, addition, or
even put the terms on different sides. This is discovery not invention. Just because
you "found a different way" (ie discovered) to look at the
relationship makes no difference to the relationship itself. As Perl teaches,
"there's more than one way to do it", which implies there is no unique
(therefore invented) way either.
Some respondents have tried to take the argument to the computers themselves.
Using the first and second examples above, if machine A can do the first equation in
its hardware but machine B has to do the second equation because it can only add
and not multiply, is this invention? Only from the machine's side, it makes no
difference to the programmer.
Additionally if machine B prefers an equation of the form b = c / 2a because for
some reason it's faster than using addition to find c, is that invention? No, the
maths is the same; it's an issue of machine implementation.
These issues came up often during the early years of Unix porting, and I'm sure
you can find a hundred examples. In all cases, Unix on different hardware wasn't
an invention. And this is the low-level stuff: if this cannot be invented, how
can anything derived from it be?[ Reply to This | # ]
|
|
Authored by: Anonymous on Thursday, November 12 2009 @ 11:02 PM EST |
>>>Software is discovered as opposed to invented.
>>>This is a consequence of the fact that software is
>>>abstract and software is mathematics. You can't invent
>>>abstract ideas and mathematics. You can only discover them.
This assertion raises the question: how are you defining invention? Edison
demonstrated that invention is often search. 10,000 filaments to find the one
that works.
Invention is often just discovery, but one that follows a lot of search, and
learning the parameters of the space, until you can start to make leaps of
"intution" because you start to recognize patterns.
The legal question is whether the results of this search should be protected to
benefit the searcher so that other searchers will have the incentives to invest
time into said search.
[ Reply to This | # ]
|
|
Authored by: BitOBear on Friday, November 13 2009 @ 12:15 AM EST |
I have said this before, and I'll say it again. Computer Programs are _not_
math. Computers were, indeed, invented largely in the math departments of
schools, and many of the words so used are from the world of math. But beyond
the fact that all reality can be modeled mathematically if you know what you are
doing, there is nothing more "mathish" to 'print("hello
worldn");' than there is to "mow my yard!".
The key is to really _listen_ to the words persons well versed in the art of
programming use to describe their work.
We _write_ software.
Compilers _translate_ high level _languages_ into machine _language_.
Useful routines are stored in _libraries_.
The list is pervasive, and all the features of computer software tie back to
_language_ theory far more than math. If the law would stop listening to the
people trying to justify things for business purposes and just start listening
to the actual words we use, then the truth would just sort-of fall into place.
Software involves no invention, no discovery, not unless Margaret Mitchell
_discovered_ "Gone with the Wind." And Software shouldn't be
patentable unless Ian Fleming deserves a patent on the spy novel.
The fundamental confabulation is that people insist on viewing
function(software) as if it means function(math) instead of
"procedure".
The first thing you must know and understand about software is that it is all a
complete _fiction_. The letter "A" is, by convention, by standard,
just the number 65, and "B" is 66, and so on. The fact that I can get
"B" by doing "A" + 1 doesn't imbue "A" with
mathiness.
Consider your Windows or Mac Desktop. The key word is "Desktop". A
_metaphor_, just like "file" and "trash can". Email is just
mail. Text Messages are just messages. Records are records.
In fact, everything you can do in/with a computer you can already do on paper.
Draw a picture? yep. Store medical records? yep. Bill a credit card? yep.
None of the "computer theory" matters.
Computer chips, and computers in general, are endowed by their creators with a
finite and fully defined list of instructions. Everything the computer can do,
it can do intrinsically.
In the world of Reductio ad Absurdum, all computer chips either do electrical
"and", "or", and "not" (usually as
"and-not" or "or-not" chips since you don't need all three
in any one theoretical design.)
At the machine code level instructions are all verbs like add, move, exchange,
compare. Each instruction takes a number of source and destination things, that
is "nouns". Every useful instruction sequence is a sentence, or in the
terms of the art "a statement".
Higher level languages are just more-vague instructions that the compiler turns
into very specific sequences. 'print("Hello world");' is put the
address of the "H" into a register, move what that register points to
into a temporary store, if it is zero then stop, otherwise cram that temporary
store value into the output place, move the pointer to the next address, go back
to the move instruction.
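Spelled out in runnable form (a Python paraphrase of those steps, not the actual output of any compiler):

message = [72, 101, 108, 108, 111, 32, 119, 111, 114, 108, 100, 0]  # "Hello world" plus a terminating zero

i = 0                          # put the address of the first character into a 'register'
while True:
    value = message[i]         # move what that register points to into a temporary store
    if value == 0:             # if it is zero then stop
        break
    print(chr(value), end="")  # otherwise cram that value into the output place
    i += 1                     # move the pointer to the next address and go back
print()                        # finish the line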
Words. All words. Word salad. Creative use of those words is why two people can
"write the same program" and end up with completely different results
that run in different amounts of time. There are good programmers and bad, just
as there are good authors and bad.
The reason that software patents are bad is that they _don't_ describe the
software, to come close a software patent would have to be at least as detailed
as Cliffs Notes for a novel. But just as Cliffs Notes are not enough to really
be the book, a notes version of a program still wouldn't even really describe
the intricacy necessary to function. After all if you handed the Cliffs Notes of
Gone with the Wind to dozens of authors you'd get dozens of different things,
none of them the original, probably none of them even close.
The reason that almost every programmer on the planet goes crazy when a software
patent is issued is simple. We keep telling you. They are taking things from the
real world, adding on the fact that they are doing it "in or with a
computer" and claiming it is a new thing.
Contracts are not patentable. Novels are not patentable. Short stories are not
patentable. Software is inherently not patentable.
I draw a picture of cake on the screen, the cake is a lie, do I get a patent on
cake?[ Reply to This | # ]
|
|
Authored by: m_si_M on Friday, November 13 2009 @ 12:24 AM EST |
Since it can't be ruled out that the article will be quoted in a legal document,
I suggest the use of uppercase letters for all terms of art, e.g. Effective
Method.
As we have learned, some lawyers love to use quotes out of context. The use of
uppercase letters would indicate that the meaning of the term is not susceptible
to legal interpretation, but is clearly and narrowly defined in mathematics.[ Reply to This | # ]
|
|
Authored by: Ian Al on Friday, November 13 2009 @ 04:57 AM EST |
I need some help from you set theory geeks to solve a practical problem of data
storage.
The universe in which the sets exist is not homogeneous. It comprises serial
domains of addressable groupings of binary, fixed length datum repositories. The
boundaries of the domains have no relationship with the data stored. Don't worry
about the data. It is symbolic in nature comprising binary words of length
compatible with the granularity of the storage universe. The symbols could
represent anything including other symbolsets.
The sets are all special cases. There can be no unions between sets. The datum
in the data sets must be kept in single file order to maintain data integrity so
I will call these 'filesets'. There is just one superset per universe which
contains pointers to complete filesets. I will call this superset the 'Fileset
Access Table' for that is its function. For the contents of this table to be
easily found, it is confined to a set of universe addresses which is
standardised and cannot be used by filesets. It is essential that filesets can
be added to, and copied and removed from the universe. This is the purpose of
the FAT.
I want the filesets to be distinguished by a naming syntax because real people
will need to select specific filesets of data. To start with a modest number of
filesets I am defining the naming syntax as a series of eight alphanumeric
characters followed by a '.'. Three subsequent characters are added as a guide
to which symbol set is used within the fileset. Because of the nature of the
universe in which the FAT is stored, I propose to represent the fileset names by
binary representations of the ASCII character values. We might as well stick to
some sort of standard, here.
OK, that's about it. Any ideas? When we have come up with the optimum workable
solution this will be a handy addition to the storage of and access to symbolic
data sets. Since it is all completely defined by your set theory and algorithms
it should be easy to get a patent for it.
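The 'optimum workable solution' already exists, of course; a few lines of Python sketch the idea (a toy in-memory model with made-up names, nothing like a real driver).

# the storage 'universe': numbered blocks of fixed size
blocks = {}

# the Fileset Access Table: 8.3-style names mapped to the blocks holding each fileset
fileset_access_table = {
    "README00.TXT": [0, 1],
    "GROKLAW0.HTM": [2],
}

def read_fileset(name):
    # follow the table to gather a fileset's data, in single-file order
    return b"".join(blocks.get(i, b"") for i in fileset_access_table[name])

blocks[0], blocks[1] = b"Set theory ", b"in action."
print(read_fileset("README00.TXT"))   # b'Set theory in action.'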
---
Regards
Ian Al
Linux: Viri can't hear you in free space.[ Reply to This | # ]
|
|
Authored by: Anonymous on Friday, November 13 2009 @ 03:13 PM EST |
Maybe this will open some eyes.
1) Software is patentable.
2) Binary software files are a (large) number.
3) Numbers can be encoded to correspond to symbols (Gödel).
4) Books consist of symbols.
5) Books are patentable.
Woops![ Reply to This | # ]
|
|
Authored by: Anonymous on Friday, November 13 2009 @ 05:09 PM EST |
1) Whoever has the gold makes the rules.
2) Whoever makes the rules can get the gold.
Sad but true.
I don't think fine points of mathematics are at issue here. Look at the reasons
given for excluding basic things like laws of nature or mathematical truths. It
is about how patenting them would be too broad, give too great a monopoly. It
is their size and scope which led to the rule, nothing special about
mathematics.
If there is any hope for reason or common sense I think the path for getting
software out of patents is the observation of how they form such an impenetrable
knot. Anyone actually doing work in software creates new things comparable to
what the patent office is handing out patents on every day. Any useful bit of
software steps on these patents unknown and at great frequency. Society is not
gaining from disclosure, only creating a land grab. These are practical
arguments unlikely to be aided by mathematics. That isn't to say all that
theory from mathematics isn't interesting and useful. It surely is, but not I
think to guiding policy.
Once again we face a situation where Congress does not give us clear law. We
have enormously important policy decisions belonging to the proper
interpretation of the word "process". Congress should do that job,
not the Court. However, if Congress does not then the Court must.
My own view is patents are fundamentally flawed right at the core and this flaw
extends to all fields of work. The flaw is preemption. It is not necessary to
copy a patent to infringe it. You can break the law doing your own invention,
your own original work. All it takes is someone getting to the patent office
first. It is fundamentally unjust.
I am aware of all the arguments about how it may be impractical to recognize
independent work or how things become obvious once known. It is a point of
human judgement to weigh these things. I regard them as inconsequential
compared to the fundamental injustice of denying people the fruits of their own
work. That is the ability to use one's own work, freedom as Richard Stallman
would explain, as opposed to the power to forbid anyone else from using it.
A just patent needs to pass the test of "We would not be able to do this
had the inventor not taught us." Pass that test and patents don't take
away from the public. Ignore that test and they are as close to extortion as
two peas in a pod.
Authored by: Jeays on Friday, November 13 2009 @ 09:54 PM EST |
At the risk of jumping in where angels fear to tread - software is just a list of instructions.
As in a recipe - a list of instructions for making a given food item. They can be followed by a human being, and one could imagine a machine which would cook a meal given a recipe and a stock of ingredients. The machine might well be patentable, but not the recipes.
As in a Jacquard loom, which has a list of instructions on punched cards for describing the pattern to be woven.
As in sheet music - a list of instructions for what notes are to be played at what times. A human being can play the music, and one could imagine a machine to do the job, if it hasn't already been built. You could patent the machine, but would be limited to copyright to protect the sheet music.
As in a CD player - the CDs hold a list of instructions about what sounds to make. Trying to patent a CD player that contains a CD for playing the "Bolero" as a "Bolero-playing-machine", and claiming this as a new machine, would stretch the definition of 'new machine' well beyond breaking point. It is the same machine, with a particular list of instructions.
As in biology - DNA holds a digital description of complex molecules that are to be constructed. Genes are just data, and it is hard to see how they can be the subject of patents.
As in software - a list of instructions for a computer to execute. There is something quite novel here, because a computer can treat its own list of instructions as data, and modify them, or write new ones. Therein arises much mathematical complexity.
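A toy sketch in Python (illustrative only; the three 'instructions' are invented) of that last point: a program holding its instructions as plain data, modifying them, and then executing them:

    # Toy sketch: a program's instructions are just data that can be inspected,
    # modified, and executed -- the point about programs treating code as data.
    program = ["x = 2", "x = x + 3", "print(x)"]   # a list of instructions, held as data
    program.insert(2, "x = x * 10")                # the list is modified like any other data
    exec("\n".join(program))                       # ...and then executed: prints 50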
As in software for a new machine in the not-so-distant future - a universal fabricator. We already have limited examples, such as the 3-D printers that can make arbitrary objects out of plastics of some kind. It would be quite feasible to build a machine that can assemble arbitrary Lego models according to an externally-supplied description. Once such machines can make any object, including complete copies of themselves, the world will change dramatically. John von Neumann proved that this is possible, and the proof is based on the separation of the part that does the fabrication from the list of instructions it requires. The list is used twice; once when it is executed, and again when a copy is made for the new machine. What universal fabricators will do for patent law is an interesting question.
So to conclude - the paradigm of a list of instructions that can be executed either by a human being or a machine has been around for a long time, and is a fundamental idea. The machines may be patentable, but lists of instructions should only be protected by copyright.
Authored by: Anonymous on Saturday, November 14 2009 @ 12:19 AM EST |
From Linux Today http://www.linuxtoday.com/developer/2009111302735OSLL
*****
For those that want to get to the nontech component of the argument, skip to
"Software is Mathematics" near the end.
Additionally:
The computer is a physical contraption that allows us to see the net effects of
the software/math (calculations and also side-effects like the painting of
pixels on a screen). The computer has parts that are patented. Software cannot
be patented.
The hope of most software patent supporters is that loading up a computer with
math and executing those instructions (including the driving of the visuals and
sounds from connected peripheral devices on cue from the software/math) can be
patented by way of calling these effects (that result from such software) a
"process".
Thankfully, it's possible that the SCOTUS will not buy that line of reasoning
(and instead use a stricter definition of "process" within patent
law), though there are many sticky issues they face.
Note a key distinction that PolR tries to explain. Math is used to analyze and
come up with physical contraptions that work. Many of these contraptions are
patentable subject matter. The computer is one such contraption. There is much
physics and math that is leveraged in building a computer that works.
However, what we have with software is different. We are actually feeding math
into the contraption. Software patent supporters want to patent this as a
process: the physical beeps and blinks that can be cued directly by the math as
the math is being evaluated automatically by a machine (eg, by a PC).
The things the software patent supporters want to patent are the virtual
creations that such an organized math allows us to perceive and utilize. They
want to patent the abstract model components (like files and virtual desktops
and "security systems"...) simply because things like a PC allow us to
"touch and see" these abstractions.
That was quite an effort by PolR, btw. I noticed this person complain numerous
times that the viewpoint of patent attorney Gene Quinn, and perhaps others, was
missing the target.
*****
-- Jose_X
Authored by: Anonymous on Saturday, November 14 2009 @ 09:02 AM EST |
Another way to look at this is that when people say they have invented a new 'process', that's not strictly true; what has happened is that they discovered a process and invented a machine to implement that process.
This is very true in IC design, where we are forever in search of more speed or smaller geometries; what really gets invented (and patented) is the machinery/materials that make the discovered process possible.
Cheers
PeteS
Authored by: grundy on Saturday, November 14 2009 @ 09:01 PM EST |
An excellent, very interesting and informative article.
One of the worst things about patents is that they are written in a language that is never used for any other purpose and does not closely resemble either English or ordinary legalese, which means that the audience for this essay must be the judges, not the patent lawyers.
I expect this essay could be even more powerful if it were edited by an English major or professional copy editor with very little or no knowledge of mathematics.
Authored by: Anonymous on Sunday, November 15 2009 @ 07:45 AM EST |
If I were a judge reading and analysing your paper, I would be required to read
and understand your primary arguments, facts and definitions supporting your
claims.
My background is electronics engineering, microprocessor design and
programming, working in the fields of radio communications, computer systems
engineering, SCADA and scientific instrumentation design and manufacture. I built
my first microprocessor system in 1978, and a 4-bit binary adder in about 1972
using type 3000 telecom relays.
I have been formally trained in microprocessor and computer design and
programming, and I presently run my own business as an engineering consultant.
I fully understand the application of mathematical abstracts to describe
physical machines or to evaluate, reduce and analyse designs in the abstract.
First, each point, as briefly as possible.
"All software is data."
Here the judges are going to require and seek a more definitive statement if
you say "All software is data."
They are going to seek the definition of software and the definition of data,
and they are also going to turn it around and ask the opposite question: is all
data software?
What is data?
Data is information, usually expressed in some abstract, human-understandable
form. Data can be how tall you are in cm, but that number is an abstract
definition, a form of notation that can be related to how tall you are. It is
NOT actually how tall you are.
If I say I'm 163 cm tall, that is an abstract statement that is the definition
of my height.
"163" and "I am" and "tall" are all abstractions that you apply mentally to
order and evaluate "data".
If someone says to you "I'm 163", that has no meaning; you can make an
assumption that it's something, but you would have to guess.
You do not have a complete data set from which to gain any practical
information. Not until you tie the abstract decimal number to your height in cm
does it make any sense.
This is the difference and the definition of "data": it's information that you
can act upon.
What is Software?
Software is a series of configuration instructions that turn on or off various
switches within the CPU, memory and I/O that run on sequential logic; the
switches that configure the machine are transistors.
Software is the series of configuration instructions, which may or may not
perform on or interact with data. Software and data are two clearly distinct
entities.
If you are required to count the number of cars that pass a certain point, you
can use an abstraction and put a line on a piece of paper every time a car goes
past. That mark is an abstract symbol representing a car passing; it's not the
actual car, but it's data (information) in an abstract form of marks on paper.
The abstract notation you use could be Roman numerals, decimal numbers, or a
pile of rocks that you add to each time a car passes, but all of these are
abstractions of the actual data.
If you had to measure the number of cars passing a point on the road but were
not able to use any abstractions, like numbers or rocks or marks on paper, how
would you do it?
You would have to gather each car, so that when your boss turned up at the end
of the day and asked you how many cars passed, you would point to the pile of
cars and say "that many".
Remember, you are not allowed to use abstractions, like defining a number
system and incrementing it each time a car passed.
Everything that is converted to the abstract can be manipulated within the rules
and definitions of the abstraction, but it says nothing about the initial
physical information apart from being a definition and abstraction of reality.
Anything that is real is not an abstraction. The cars passing the point on the
road are real; the number you write on the paper is an abstraction of that real
event.
So software and data have two separate definitions. Software, like anything
physical, can be converted into the abstract, and therefore undergo
mathematical rigour, analysis and reduction.
"All software is discovered and not invented."
This is a case of definitions and semantics. In definitions you are saying that
a discovery and an invention are different things, and you're right, so you have
to look at the definitions of "discovery" and "invention". What is a discovery?
It could be an observation, or the result of a measurement or experiment, or a
flash of intuition. It may be a useful discovery or it may be a discovery of a
failure. You can discover that cheese does not taste that nice with peanut
butter, or you may discover that a round rock rolls better than a square one.
The invention is in the taking of your discovery and making it into something
unique and useful; therefore a discovery can become an invention, but not all
discoveries can become inventions.
In our world the difference between a discovery and an invention is whether your
discovery meets the innovation criteria for an invention.
Discovering something that has already been discovered is by definition not a
discovery, even if done independently but AFTER the initial discovery (it's the
"no prize for second" rule).
When Edison invented the electric light he made many discoveries, and some
required inventions from other people. He tried many different types of light
filaments, including horse hair, human hair and string; by trying lots and lots
of different things he found that a thin piece of wire glowed brightly but would
quickly burn out. A discovery was made that horse hair did not work; another
discovery was made that thin wire worked but burnt out in seconds.
Edison then tried many other experiments, along the way making various
discoveries. Sooner or later he placed the wire filament in a glass chamber and
tried different gases and pressures. He discovered that if you lowered the air
pressure the light would last longer before it burnt out.
So was the electric light a discovery or an invention?
He did not invent electricity, or wires, or glass, or vacuum pumps, but he
discovered that the right combination of these various components creates
something unique and innovative. The invention was born from a combination of
discoveries, intuition, innovation, effort, desire and an effective process.
The effective process in this case was to produce a working light globe. This
light globe is subject to abstraction and analysis just like any other physical
object or invention. It's not the underlying math or abstract notation that is
patented; it's the "proof" and the real aspect of what you have defined in the
abstract.
If you define something in the abstract it is, and always will be, an
abstraction of the real.
Software is REAL; it's not in the abstract. Software is configuration
instructions that tell the CPU how to configure itself to perform a
predetermined task.
"All software is abstract."
NO. Software is the really physical configuration information that sets the
configuration of the CPU according to predetermined rules.
It's the turning on and off of specific transistors within the memory and CPU to
define a state. This is real, and by definition not an abstraction.
Humans make abstractions: they replace a turned-on transistor with a logical
"1" and a turned-off transistor with a logical "0". This is the first layer of
abstraction, the first time we step away from the real world to the abstract
realm.
1's and 0's do not move around the CPU or memory; physical switches open and
close as a real representation of the abstract binary 1 or 0.
It's trivial for a human to convert things into abstract terms, like symbols or
numbers, and to derive rules and methods around those abstractions. In fact it
is so natural that we don't even think about it.
But it's something we do all the time: convert to the abstract and assign
symbols and rules to that abstraction.
What is not an abstraction is something that is real. A car passing a point on
the road you have to count is real: the car really did go past. You mentally
convert that into the abstract when you assign a symbol to it and put a mark on
a piece of paper, or hold a number in your head.
Software is the REAL method of configuring a system to perform real functions /
configurations on real information that is represented in the abstract form of
a number.
So your computer instruction (software) might be to say "every time a car goes
past, put a mark on this piece of paper".
This is a very low level of abstraction but it IS an abstraction. The
instruction "put a mark each time a car goes past" is an instruction, and
relates to a physical act and a configuration. That configuration is the mark
you make, and that mark is an abstraction for a car that passes.
The instruction is real, the mark is real, but the mark is an abstraction for a
car, and is data. The instruction is not an abstraction, as you really have to
perform the instruction in the physical sense to perform the method.
Without knowing the exact definition of the instruction, a pure number returned
without an explanation of the real aspect of that pure number does not take it
out of the abstract into the real, as we do not know what question was asked.
If you say "count the number of cars in a day", and he gives you a pure number
like 200, you may assume that the number means 200 cars went past, but it might
not: there might have been more, some might have been trucks, or he may have
counted every second car, so you would need other data or information to resolve
it.
This can be done by carefully defining the question so there is no ambiguity,
which is using an "effective method" and being careful to define your method of
abstraction clearly: "EVERY time a car passes, put a mark on this paper."
He could have said, "create a number system, and every time a car passes the
point increment your number system by one, and at the end of the day explain
the rules of your number system and we can evaluate from the data how many cars
passed."
But this requires insight on the part of the human, and therefore fails the
fourth rule of your effective method.
"All software is mathematics."
It is true that mathematics can be used to convert software into the abstract,
just as you can convert cars going past a point on the road into the abstract
realm of math. That does not mean the cars are not real, but it does mean that
your abstract number of cars, or your abstract marks on paper, are not
something as real as the cars passing on the road.
Everything from calling an ON transistor in a computer a "1" and an OFF
transistor a "0", up to using the most high-level programming language, is an
abstraction of the real functions a CPU and memory perform.
To actually perform any operation on a general-purpose computer, the CPU and
RAM/IO combination has to break the abstract definitions of the instructions
and methods into REAL configuration information, to allow the CPU and memory to
perform their predefined task in the real world, not in the abstract layers and
worlds that humans have created to more easily understand and configure these
specific sequential state machines (computers).
If you analyse the exact sequence of events of even the simplest computer
program, you see no abstraction at all. You see a sequence of specific
configurations that perform a predetermined function.
The CPU has no concept of data, or memory, or function; it simply configures
its system to meet a predefined configuration that has been designed to meet a
basic physical function.
2 + 4 is an abstract concept, bound by the rules of math and the decimal number
system. The numbers "2" and "4" are abstractions for "something". It could mean
2 cows or 2 trees or just 2 as a pure number, but it's a human-conceived
abstraction where the symbol "2" means "two of something" or just "TWO".
Even saying "two cows" is an abstraction: it's abstract words describing a
number of objects (and a brand of beer).
Saying "within that memory location is the value 2" is another abstraction. If
you look in that memory location you won't see the symbol or value 2; if you
put a meter on the bus or in the chip and looked at the voltages on each of the
data lines you would read
0v 0v 0v 0v 0v 0v 5v 0v
so the memory has just one of the transistors in that location turned on.
We make our first abstraction and call 0 V a logic ZERO and the 5 V (an ON
transistor in the configuration) a logic ONE.
So our first level of abstraction is the definition that an ON transistor on a
specific bit at a specific memory location is turned ON, and from that
transistor being turned on you make the further abstraction that the byte of
data at that memory location, with that bit turned on, represents the decimal
value "2".
This is your abstraction; the computer, memory and CPU do not work in the
abstract but in the real and physical.
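A tiny sketch in Python (illustrative only) of those two layers: start from the measured voltages, apply the first abstraction (5 V as logic 1), then the second (the bit pattern read as the number 2):

    # Sketch of the two layers of abstraction described above.
    voltages = [0, 0, 0, 0, 0, 0, 5, 0]            # what a meter would read on the eight data lines
    bits = [1 if v == 5 else 0 for v in voltages]  # first abstraction: 5 V -> logic 1, 0 V -> logic 0
    value = int("".join(str(b) for b in bits), 2)  # second abstraction: the bit pattern as a number
    print(bits, value)                             # [0, 0, 0, 0, 0, 0, 1, 0] 2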
All software is data.
(Data is gathered, and can be an abstract representation of a physical event.
It can be the number of cars that pass a point in the road, or the current
temperature, but it is abstract in nature; the number 34 deg C is an
abstraction of a temperature expressed in specific units. It's DATA.)
Software is the real and specific sequence of configuration instructions that
allow the manipulation of information by means of its absolute configurations,
the specific order of the software instructions, and the specific configuration
of the hardware.
All software is discovered and not invented.
There are clear distinctions between discovery and invention; you can turn
discoveries into inventions if they are new and unique and practical and meet
the requirements for patents.
There is no distinction in how you arrive at something, whether from invention
or discovery.
A discovery need not lead to an invention, and a discovery does not preclude
something from being an invention.
An invention is the application of an idea or discovery in a new and practical
way.
********************
You then go on in great detail to explain how hard computation theory is, and
how you base your argument on high levels of abstraction and mathematical
analysis, including lambda calculus, to show that software is math.
We have already established that math is an abstraction; you even say so
yourself.
So the fact that you can break down algorithms into effective methods that can
run on a Turing machine (another abstraction) does not make the actual software
or the function of the algorithms any less real.
All software is abstract.
Software is real. It is an exact set of instructions that direct the CPU and
memory to enter specific states; those states are used in combination and in
sequence to perform the operation of the function.
This series of instructions is what turns a general-purpose device into a
special-purpose device.
Being able to convert that real set of instructions into abstract concepts, and
to apply abstract rules to those abstractions, does not make the original
system itself abstract. The real configuration information that is called
software exists in the physical world in the form of transistors turned ON and
OFF.
High-level programming, and any level of programming a computer that does not
directly involve the configuration settings, is an abstraction that has to be
converted into real configuration information in software (also real) for it to
function.
A computer does not have 1's and 0's running through it; it has states of ON
and OFF transistors.
"All software is mathematics. "
All mathematics is an abstract representation of some physical or pure entity,
the number / symbol "2" is an abstract for the number two, the word
"number" is an abstract for a quantity of something.
These are all abstractions.
if you say you have 4 apples on a table, and you really do have 4 real apples
on the table, Saying "I have 4 applies on a table" is an abstraction,
the number 4 is an abstract for the value "four of something" and is a
description of something physical, there is another abstract assigned to
"apple" the word apple is not a REAL apple it's an abstract definition
of an apple.
Software is in itself not abstract but the definition and methods of
development, construction and design can be performed in the abstract world to
achieve the non-abstract (ie Real) sequence of configuration instructions that
make up a software program or the set of instructions that make up an
algorithm.
Inventing and patenting a new bridge design
Consider that you make a discovery and, with inventive and innovative
initiative and by your own logical reasoning, you invent the design for a new
type of bridge, one that is much lighter and cheaper to make and can span any
distance.
You can use pencil and paper and, by using language, math, diagrams and
intuition, create an abstract model of your bridge. You can then apply the laws
of physics and the abstraction of math to analyse your design and invention,
and you may invent specific construction methods or design methods that are
unique and could be patented.
But you cannot build the bridge, take the patent office to the bridge and tell
them to allow a patent on your invention.
You have to transform your design and invention from the physical to the
abstract; you have to reduce your design to math and abstract terms and
descriptions, possibly including diagrams.
You may have to take the results of your abstract analysis and build test
models to ensure your abstract design and analysis are correct in the real
world.
Or you may be able to use the abstract rules of analysis to refine and improve
your initial design.
But applying analysis and abstract ideas to a real physical object does not
change the fact that the object of your abstract analysis is no less real, or
that it does not exist in the physical world.
The only thing that does not live in the physical world is what lives in the
abstract, and the abstract can represent the physical or it can be purely
abstract.
So yes, software or a bridge can be expressed in abstract form, but in
themselves they are not abstract.
Abstractions are what humans need in order to make things more understandable
and to allow manipulation of ideas and data/information in specific ways.
But those abstractions are just that, abstract; the numeral "5" does not, by
itself, mean anything real.
Computers do not deal in the abstract, or in numbers; they deal in specific and
exactly known states and configurations.
All abstractions exist to simplify the understanding and the configurations in
a manner more easily understood by humans.
Our highest level of abstraction is human intuition. Your own mind is very good
at abstraction; it does it without effort and automatically. Our own speech is
an abstraction of the real world: if you hear the word "tree" you instantly
relate that abstraction, the word "tree", to a real tree.
But the word "tree", and the mental picture in your mind of a tree, is an
abstraction and may or may not represent a REAL tree. You don't have enough
information to make that connection between the abstract and the real.
Computers cannot handle abstractions, and software is therefore not, by any
definition, abstract.
The actual code is absolutely real, physical configuration information used by
a specific CPU and memory configuration to perform a special function, turning
a general-purpose computer or machine into a special-purpose device that is
special-purpose because of the unique configuration and sequence information.
Not even the abstraction of pure numbers or binary numbers is known to
computers; there are no 1's and 0's running in your CPU. That is the first
layer of abstraction humans attribute to the real as a means of easier
understanding.
A programming language like assembly or C or Fortran is a further level of
abstraction, further isolating you from the underlying real instruction
sequence that is necessary to make your CPU and memory perform a specific
function.
The C programming language is a layer of abstraction; it allows humans to more
easily manipulate the configuration information that is directed to the CPU and
memory, and as another layer of abstraction it makes the process of defining,
creating and documenting your method easier.
But it does not change the underlying innovation in your original design.
Suppose I design a device that uses combinational logic (AND, NOR, OR gates and
so on) that performs a specific function, and that function is new and
innovative and able to be patented. I might perform a conversion from the real
circuit and create that circuit (before I build it) in the abstract: I might
start by drawing a circuit diagram of my design, which is an abstract
definition of a real physical device.
Once I have the design and abstract circuit worked out, I might apply a set of
equations to my design; I might use a form of calculus and math to define my
design in the abstract. Then I might raise the level of abstraction and convert
my circuit diagram full of logic gates into Boolean truth tables and Boolean
expressions, a further level of abstraction.
Then I might use my knowledge of abstract Boolean algebra to reduce my design
down to the minimum number of gates that still achieves my function (AND NO
LESS).
So you have invented a circuit that performs a function; you have converted it
into the abstract of a circuit diagram and mathematical equations; you have
used the abstract math and circuit diagram to refine your invention while
maintaining its functionality.
Then you can take your abstract design, and the result of your analysis and
reduction, and build a real physical device.
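As a trivial sketch of that kind of reduction, in Python rather than with pencil and paper (illustrative only; the expression is invented): the circuit (A AND B) OR (A AND NOT B) reduces by Boolean algebra to just A, and an exhaustive truth-table check confirms the reduced circuit is equivalent:

    # Sketch: verifying a Boolean-algebra reduction by exhaustive truth table.
    # (A and B) or (A and not B) simplifies to A; the check below confirms it.
    from itertools import product

    original = lambda a, b: (a and b) or (a and not b)
    reduced  = lambda a, b: a
    assert all(original(a, b) == reduced(a, b) for a, b in product([False, True], repeat=2))
    print("reduced circuit is equivalent to the original")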
This process is exactly the same as the invention of a method or algorithm, or
the invention and creation of software.
It does not matter how you achieve the end result, whether by abstraction,
invention or discovery; what matters is the real and physical end result or
product/innovation/invention, which is what you consider the final "proof" of
your analysis.
The analysis of an abstraction does not make the physical and real item any
less real.
The method of achieving innovation has no bearing on the reality of that
innovation or invention.
If you invent something in the abstract, or by using an abstract method, it may
be able to be converted to the real world.
This is why C code instructions, which are abstract representations and a more
human-readable form of abstract instructions, can be compiled on different CPUs
and memory configurations.
The final compiled code for each system is different, as the CPUs and memory
configurations are different; the real and raw computer configuration
instructions for one CPU are different from another's.
But the high-level abstract definition of the algorithm is abstract and not
real. It has to be converted into the very specific and real configuration
instructions for the particular CPU and memory, to allow the CPU to enter a
specific configuration with certain transistors either turned on or off.
Real computers do not work in the abstract; the abstract has to be converted
into an absolute and precise configuration, which means the exact combination
of ON transistors and OFF transistors, to create specific real voltage levels.
Our first level of abstraction is in defining the presence of an ON transistor,
and the 5 volts on the bus, to represent the abstract binary logic level "1".
The higher levels of abstraction, moving AWAY from the REAL, are assembly,
first-generation programming languages, second-generation programming
languages, and may evolve to levels of abstraction where you can just tell your
computer to calculate the following two numbers.
But before a computer can actually perform any operation it has to take those
abstractions and convert them to real configuration/information settings and
the correct sequence of turning ON and OFF transistors to achieve a specific
result.
Let's look at a very simple computer program (software) to add two numbers (oh,
sorry: a number is an abstract term and a computer has no concept of the
abstract).
So in computer terms, what do we need to do to achieve a calculation of 2 + 3?
Start with understanding that the term 2+3 is a pure abstraction, as numbers
themselves are abstract and may or may not be representations of a physical
entity; if a number represents a physical entity it can be called data.
So you want your CPU to give you the result of the abstract calculation of 2+3.
The computer does not have any concept of the symbols "2" or "3" or "+", or
that the term 2+3 means "take the decimal abstract number 2 and 'add' it to the
decimal abstract number 3 and return the result".
The computer deals with these abstract concepts by converting them into "real"
objects, such as a transistor being ON or OFF.
So we'll use abstractions here in our description.
Let's look at how a simple CPU performs our abstract equation.
We'll make it a bit simpler and assume a level of abstraction where an ON
transistor and 5 volts on a bus represent a logic 1, and 0 volts and a
transistor OFF represent a logic zero (keeping in mind this is also an
abstraction).
There is no instruction for a CPU or computer that says "add 2 and 3 together
and return the result of that calculation".
Computers have no concept of the abstraction of "numbers", just as they don't
have any concept of "words". They only know how to set up their internal
configuration based on a specific set of rules that they perform in a sequence
of events.
So now you have to invent and design a method for going from the abstraction of
math to the real world of transistors being turned on and off, and translating
that abstraction, by whatever methods are employed, is the basis for real
computer programs derived from the abstract and theoretical.
Create a computer program that adds the abstract numbers 2 and 3 and returns a
result
First, you need to create an abstract model of what you are trying to achieve;
then you have to realise that abstract concept in real terms that your CPU can
configure itself to achieve.
The invention in your design is how you go about achieving your function or
algorithm within the confines of the functions you have.
So you have a CPU and some memory connected to it. You have to create a set of
configuration information that, if performed in the correct sequence, will
result in performing your calculation (2+3).
So in a location of your available memory you set up a configuration of ON and
OFF transistors that represents the binary abstract word 0000 0010, which for
you represents the abstract concept of the decimal number 2 as defined by the
abstract symbol "2".
So there are two layers of abstraction: from the real ON/OFF transistors and
the absence or presence of 5 volts you abstract to a logical "1", and in our
case that logical 1 (an ON transistor, the presence of 5 V) on the second bit
of the 8-bit word at a specific memory location means the decimal symbol
abstraction "2".
Next you place in another specific memory location the configuration of ON/OFF
transistors that is the abstract binary equivalent of the decimal abstract
number "3" (again, just a combination or configuration of ON/OFF transistors).
Remember (2+3): now we have to perform the abstract function of addition. What
we are doing is not "addition" as a human understands it; it is the combining
of the bits in one specific memory location with those in another specific
memory location through the ALU's adder logic, with the result placed in a
third location.
The CPU and memory have no concept of "numbers" or "addition" or math; the CPU
simply performs a dumb operation according to specific instructions.
Now consider the contents of the two memory locations (decimal 2 and decimal 3
in their abstract forms).
What you tell the CPU to do is not an instruction saying "add the two numbers
together"; that is a human abstraction and not within the realm of
understanding of a computer.
So now you have the two "numbers" fixed in specific locations within memory,
and you have to "add" the two numbers. "Numbers" and "ADD" do not mean anything
to the memory, and numbers mean nothing to the CPU and ALU (arithmetic and
logic unit).
You instruct the CPU in very specific terms to enter a preset configuration.
You may state (in the form of a machine instruction, which happens to be in the
same form as the "data" in memory):
"CPU, place on the address bus the binary word, in the form of voltages or no
voltages, that addresses the location of the memory where the data I have to
manipulate is stored."
This tells the CPU to output a binary word of a specific value on the address
bus; in fact it's a specific configuration the CPU can enter (and usually
derived by a circuit called a program counter).
But before the computer can manipulate data that represents the number 2, it
has to know what to do and exactly how to do it.
This is what computer code, or software, is and does.
The computer will start and will jump to a predefined start vector, which means
the address bus will be set up with a specific address, and at that address
will be the first instruction of your program (software, not data).
The software, or instruction, is encoded in a form that the CPU can understand;
in this case it's a binary word that is a state of memory, and again
transistors ON or OFF.
The CPU, when started, will read this software, this instruction, and configure
itself based on it. It's not data; it's a clear instruction to do a specific
task.
So the CPU starts with our program to add 2 and 3.
The CPU starts, the CPU outputs a specific address word on the address bus, the
CPU initiates a "read" command on the read/write line of the memory, and the
CPU reads the instruction as a binary word that represents a specific
configuration. In our case we want to increment the address bus by one and load
the contents of that address location into a register in the CPU, the
accumulator.
The abstract definition of the instruction might be
MOV 0002,A ; move contents of memory location to the accumulator register.
Still, the CPU and memory have no concept of what is "in" that memory location,
but in our case it represents the abstract decimal number "2". That, though, is
our abstraction, a human abstraction, and the memory and CPU cannot deal in
abstractions.
Next the CPU moves to the next memory location, and it knows that within that
location there is a specific instruction. The CPU knows it's an instruction
because that is what it's expecting; if it's the wrong instruction the program
will crash. That's how computers work: a CPU performs an instruction, completes
that instruction and then looks for, or waits to be told to perform, another
instruction.
So you have the "2" abstraction configured in the accumulator register of the
ALU within the CPU; you have set the CPU into a specific configuration. Now the
CPU clock will cycle and the program counter will increment. Having just
performed a process on data and completed that configuration exercise, the CPU
seeks out its next instruction: the PC or program counter is incremented by the
clock, the address bus value is incremented by one, the data bus returns
another binary word, and the CPU reads that word as an instruction.
In our case it may be "output the address of the memory location where the
second 'number' is located and combine that data from the data bus, through the
ALU's adder logic, with the contents of the accumulator, and leave the result
of that operation in the accumulator."
IF the accumulator and the corresponding memory location contain your
abstraction of the number "2", and IF the memory location you have defined for
your abstract number "3" is present and correct, and IF the CPU and memory have
taken that configuration information and performed exactly the tasks you have
asked of them, you will achieve the result of your abstract equation. But you
have had to do this by converting your abstract concepts into real ones.
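For readers who want that sequence in one place, a toy sketch in Python (illustrative only; the three-instruction machine and its opcodes are invented, not any real CPU) of a machine stepping through "load 2, add 3, halt" exactly as described above:

    # Toy machine, invented for illustration: memory holds both the "program"
    # (configuration instructions) and the "data"; the CPU just steps through states.
    memory = {
        0: ("LOAD", 100),    # copy contents of address 100 into the accumulator
        1: ("ADD",  101),    # combine accumulator with contents of address 101 via the adder
        2: ("HALT", None),
        100: 0b00000010,     # the bit pattern we abstract as decimal 2
        101: 0b00000011,     # the bit pattern we abstract as decimal 3
    }

    pc, acc, running = 0, 0, True          # program counter, accumulator
    while running:
        op, addr = memory[pc]
        if op == "LOAD":
            acc = memory[addr]
        elif op == "ADD":
            acc = (acc + memory[addr]) & 0xFF   # 8-bit adder behaviour
        elif op == "HALT":
            running = False
        pc += 1

    print(acc)   # 5 -- the bit pattern 0b00000101, which we read back as "2 + 3 = 5"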
You also cannot reduce the real; you need to convert the design or idea into
abstract ideas that humans can understand and manipulate as required.
All the lambda calculus, or other levels of abstraction, still needs to be
reduced into real-world functions to be completed in the same real world.
Doing something in the abstract does not make it real; doing something real can
be readily converted into the abstract and back to the real. But to do actual
work you need to operate in the realm of the real and not the abstract.
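A sketch in Python (illustrative only, using Python lambdas as a stand-in for the lambda calculus): an "abstract" Church-style addition produces nothing usable until it is actually evaluated down to concrete steps:

    # Sketch: Church-numeral addition written purely as lambdas (abstract),
    # then reduced to a concrete count by actually applying the functions.
    zero = lambda f: lambda x: x
    succ = lambda n: lambda f: lambda x: f(n(f)(x))
    add  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

    two   = succ(succ(zero))
    three = succ(succ(succ(zero)))
    five  = add(two)(three)

    print(five(lambda k: k + 1)(0))   # evaluating the abstraction yields the concrete result: 5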
Inventions and concepts and ideas can be represented in the abstract, in terms
of functions, numbers, notation, words, diagrams and math; in the abstract you
can manipulate those abstractions as you see fit, or guided by specific rules,
semantics, logic, words and diagrams.
These are all abstract tools, to design, model and analyse the real in abstract
terms, or to analyse the abstract in abstract terms.
"All software is data"
No: data is data. Software is specific configuration information, and data is
gathered information that may be an abstraction of something real or an
abstraction of an abstraction.
The numeral "2" is an abstraction, just as the value E in E=mc2 is an
abstraction of energy in joules. It could just as well be an abstraction for
"eggs", and m might be an abstraction for "mice", but that is a different
abstraction from what E=mc2 is commonly known to be and mean.
Software can be expressed in the abstract, and high-level programming languages
are just that: they are abstract languages that make it easier for humans to
apply these abstractions to end up with a real and working function.
Again, the highest level of abstraction is either spoken or written plain
English (or your native language), or the universal language of mathematics,
but in either case these are still abstract concepts that are human aids, and
they have to be converted from the abstract to the real and practical to be
real and practical.
You can program a computer without the aid of abstraction, but you have to work
at the real level of the computer/hardware/software to achieve real results.
Or you can program a computer using abstraction and rely on standard methods of
de-abstraction that you can apply automatically or manually to achieve your
function.
That is, you can use a high-level programming language to create your
sequential configuration information for your CPU and memory, but at some point
the high-level language has to be converted into non-abstract, specific
configuration information for your CPU and memory.
Authored by: Anonymous on Monday, November 16 2009 @ 02:29 AM EST |
Using an "obvious" standard is ridiculous when you consider how many
very complex analytical things get produced yearly by mathematicians and by many
others (yes, it takes "ingenuity" many times to formulate useful
theorems and to prove them). Are poems patentable? These can involve tremendous
amounts of creativity as well as knowledge of and command over a given language.
How about a well-crafted piece of law, or a musical score, or a play script, or
a film? Does achieving the latter not require lots of hard work and skill?
Aren't many nonobvious things created daily in these fields?
Many things that aren't obvious can be discovered after days of work and
certainly wouldn't be considered "brilliant" by many. And many things
that are brilliant aren't patentable, and if they were patentable, many might
conclude 20 years was way too long.
I think the novel and nonobvious test is a bit.. misguided.
Are we giving monopolies, that is, abridging the rights of everyone on the
planet and destroying a potentially ridiculously gargantuan amount of
collaboration and further discovery, because someone concluded something
brilliant (or simply nonobvious) and filed the paperwork first? Is stopping
everyone else (including many who likely helped lead to that "spark")
for twenty years any way to promote the progress of science and useful arts? ..
and to do so when the described "brilliant" (aka nonobvious) invention
is described in claims so incredibly broad?
Do patents serve any use that helps society (after opportunity costs and
abridgment of rights are factored in)?
Aren't there alternative ways to promote innovation that do less damage?
If we aim to protect the "little guy", can't we simply condition patents that
way (i.e., they can only be used to guarantee a "little guy" some market share
against a "big guy")?
Can't we give tax credits or other monetary grants as a reward for someone
taking significant risks (with money) trying to innovate? Surely, getting extra
cash and having to pay less in taxes should help drive investors to the company
(not that "driving investors" should be the ultimate goal). And if
this isn't enough, for example because of monopoly abuses, then doesn't it suggest
that some other laws are broken elsewhere and perhaps those should be fixed?
Do we really want to encourage an extravagant wasting of resources simply
because patents are so powerful that they can allow huge cost to be recouped and
huge profits to be made? Open source development has shown that lots of R&D,
manufacturing, distribution, and other costs can be very low and/or otherwise
distributed among many entities voluntarily cooperating, perhaps on the
strength of the promises in various copyright licenses.
-- J
Authored by: CraigG on Monday, November 16 2009 @ 03:05 AM EST |
Portions of this seem to prove rather too much. In particular, with symbolic systems we need to distinguish between denumerable and non-denumerable infinities; basically the difference between integers and reals.
We can in principle sort every possible integer into a strict order, though the list is infinitely long. We can't do the same with reals, because for every two putatively adjacent values in the ordering, there exists a value between them (an infinite number of values, in fact).
Now, the number of potential well-formed sentences in any language is non-denumerably infinite. I would expect the same thing to be true of mathematical theorems.
But in any case it is true of computer programs. The number of possible novels, for example, is non-denumerably infinite, and I can write a program
    printf("%s", "It was a dark and stormy night. Cobol the butler was roused from the pantry by a knock ...");
So I'm not sure what the whole enumeration and discovery vs. creation business means.
Authored by: kjhambrick on Monday, November 16 2009 @ 08:13 AM EST |
PolR --
A question ... You say:
Kurt Gödel was working on the
Hilbert program. He was working on a formal system called "Peano arithmetic"
that defines the arithmetic of positive integers. This is the numbers 0, 1, 2, 3
....
Is not {0,1,2,3,...} the set of Non-Negative Integers while
{1,2,3,...} is the set of Positive Integers ?
-- kjh
Authored by: Anonymous on Monday, November 16 2009 @ 03:29 PM EST |
The explanation of recursive functions is flawed. The author gives the "two
rules" for recursive functions, then shows an example with several steps. Step
number six is:
(0''''')''' = 0'''''''' because the parentheses
can be removed at this point
Yet there is no rule that tells
you when parentheses can be removed, or even a rule explaining what parentheses
mean.
Authored by: Anonymous on Tuesday, November 17 2009 @ 10:20 PM EST |
>> .. with the fact that every single change of state the CPU goes through
is describable by a math function.
A distinguishing characteristic is that the "state" of the CPU and of
associated memory is precise to the highest degree. Digitalization has made it
possible to model the state of the electronics precisely. The machine fits the
abstract model (the abstract machine), and vice versa, perfectly for all
intents and purposes.
This contrasts with the mathematical description of material objects which is
much less precise. We have a limited knowledge and control over nature..
..but we have absolute control and knowledge of a computing machine, at least as
far as analysis goes with respect to normal functioning (since all machines do
break down).
Digitalization means the all-important information (the "product") is
the abstract software, which can be transferred at lightning speed across the
world at essentially zero cost.
Truly, the Internet and the PC have given ordinary folks the ability to create
significant products for mankind. This contrasts sharply with the wonderful work
done by factories, as most people have very limited roles in factories since
it's extremely costly to retool (reprogram if you will) a factory. Access to
factory retooling is essentially denied to all but a handful. And the retooling
is frequently very costly to enable.
Yes, there is much on the line when it comes to software patents' overstepping
and abridgment of rights and of newly-gained accesses by society.
-- Jose_X
Authored by: Anonymous on Wednesday, November 18 2009 @ 11:16 AM EST |
By removing the incentives for most people to pursue an area (the area covered
by the extremely broad patent), each new patent slows the pace of innovation
further as more patents are taken out, until a saturation point is reached.
That saturation point depends in part on how aggressively the patent holders
decide to use the patents.
There are many inefficiencies in such a system -- obviously -- with the huge
number of removed rights and incentives.
For software, the opportunity costs are that much larger than say for an area
related to industrial manufacturing because of the very large number of creative
minds that could otherwise participate in practice (only in the software/PC
case).
Slowing down the pace of growth lowers the risks that incumbents will get
displaced or seriously challenged or that their relative handful of employees
will have to worry much at all about losing their jobs to the large number of
(figuratively) hungry folks outside.
And, sadly, the question should not be about losing a job among a lower number
of jobs (something unlikely to happen if the job holder can be political). The
big problem is that more jobs won't exist because of all the inefficiencies,
loss of incentives, and many new illegalities.
This model approaches the Communist approach (where the number of paths to
success is very limited and in most cases there is a large number of people
ahead of "you" in line for the positions).
Removing too many rights will lead to a lot more law-breaking, forcing the
government to move in the direction of ruling with an iron fist or else allowing
a mockery to be made of the law.
And the Internet and computers will go severely underutilized (at least in
relative terms).
BTW, in the early stages, there will be a temptation to think that the patent
system is creating incentives. But eventually, the chickens will come home to
roost.
Well, gloom and doom aside, those that brought patents into the Constitution
were careful and left a way out for all of us: [Article I, Section 8:] "To
promote the progress of science and useful arts...."
Hopefully, we will be able to identify what hurts us from what helps us.