Friday, 30 November 2012

3.1.4 - Turing Machines

Turing Machines

  • Background of the Turing machine
  • In the early twentieth century, mathematicians were trying to find a universal machine: a mathematical model consisting of a small number of basic operations, together with a machine that could perform all of these operations.
  • The machine would then be able to run any algorithm.
  • This model would be a precise definition of the term algorithm.
  • The hope was to find algorithms that would be able to solve all mathematical problems.

 
  • Solvable and unsolvable problems
  • In 1931, Kurt Gödel showed that there are mathematical statements that can never be either proved or disproved.
  • Alan Turing then showed that it is not possible to write an algorithm that can determine whether a given mathematical problem (or computer program) is solvable or unsolvable.
  • Alan Turing was a British mathematician who, in 1936, wrote a paper called ‘On Computable Numbers’.
  • This paper introduced the concept of the Turing machine – a universal machine capable of solving any problem that can be represented algorithmically.
  • He used his machine to show that it is not possible to determine whether any given problem has a solution. This has serious implications in computing: it may not always be possible to tell whether a particular problem is unsolvable, or whether it has simply not been solved yet.

  • How this relates to Computers
  • The work done by Turing on mathematical models was the basis from which the first computers were developed.
  • It can be shown that anything a modern computer can do can be represented using a Turing machine.

  • Turing machine
  • A Turing machine consists of
  1. An input alphabet.
  2. A tape that is infinitely long.
  3. An output alphabet that can be printed on the tape.
  4. A tape head that is used to read/write to the cells.
  5. A finite set of states, including a start state.
  6. A program (set of instructions).   
  • Another way of looking at this is that a Turing machine consists of a deterministic finite state machine together with an infinitely long tape.

  • How this relates to Computers
  • A Turing machine describes an algorithm.
  • A computer runs programs, and a program is essentially an algorithm.
  • Turing machines can therefore be used to run and study algorithms independently of the architecture of any particular machine.
  • An algorithm running on a particular machine will have had to make decisions about how to store values (e.g. the precision of how to store a real number).
  • This does not happen on a Turing machine, meaning a Turing machine is a more general representation of the algorithm.

  • Universal Turing Machine
  • Turing also realised that a Turing machine itself could be encoded and given as an input to another Turing machine.
  • Such a Universal Turing Machine can simulate any other Turing machine, and so is capable of solving any problem that can be represented as an algorithm.

  • Transition Rules
  • Transition Rules in a Turing Machine can be represented like this :
  •          δ(1,1)=(2,1,→)
  • Transition Function(Current State, Input Symbol) = (Next State, Output Symbol, Movement)
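The transition rules above can be sketched as a small simulator. This is an illustrative sketch, not part of the notes: the rule dictionary, the blank symbol `_`, and the example machine (which overwrites every 1 with 0 and halts at the first blank) are all assumptions chosen for the demo.

```python
# Minimal Turing machine simulator (illustrative sketch).
# Rules follow the notes' convention:
#   delta(Current State, Input Symbol) = (Next State, Output Symbol, Movement)

def run_turing_machine(rules, tape, start_state, halt_state, blank="_"):
    tape = list(tape)
    head, state = 0, start_state
    while state != halt_state:
        if head == len(tape):
            tape.append(blank)          # the tape is infinitely long: grow it on demand
        symbol = tape[head]
        state, tape[head], move = rules[(state, symbol)]
        head += 1 if move == "R" else -1
    return "".join(tape)

# delta(1, "1") = (1, "0", R): stay in state 1, write 0, move right
rules = {
    (1, "1"): (1, "0", "R"),
    (1, "_"): (2, "_", "R"),   # blank found: enter halting state 2
}
print(run_turing_machine(rules, "111", start_state=1, halt_state=2))  # 000_
```

Because the rules form a deterministic lookup table, this also illustrates the earlier point that a Turing machine is a deterministic finite state machine plus an infinite tape.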






Monday, 26 November 2012

3.1.3 Finite State Machines

Finite State Machine (FSM)
  • An FSM accepts inputs and produces outputs
  • Used extensively in computer science applications such as traffic light control systems, specifying a computer language, spell-checking programs, and networking protocols
  • A state transition diagram is used to show the FSM as a picture

  • Finite State Automaton (FSA)
  • This type of FSM has a finishing (accepting) state and produces only a yes/no output based on the final state. E.g. state 2 = Yes, any other state = No
  • FSMs with an output
  1. Mealy machine- Output on the transitions (arrows)
  2. Moore machine- Output on the states (circles)

  1. For example, 0|A: the input is 0 and the output is A

  
    2. C is the output for state 1, A for state 2 and B for state 3







An FSA which can be programmed easily


  •  Transition table
  • Shows the states and transitions in an FSM in table format


  • Transition Function
  • Transition functions show the states and transitions as a series of functions
  • (Input Symbol, Current State) → (Output Symbol, Next State)


  • Deterministic and non-deterministic FSA
  • A deterministic FSA has exactly one transition (arrow) from each state for each input
  • A non-deterministic FSA can have multiple transitions from a state for the same input, so there is more than one possible route around the state transition diagram
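A deterministic FSA can be programmed directly from its transition table. The sketch below is an assumed example, not one from the notes: a DFA whose accepting state says "yes" to binary strings containing an even number of 1s.

```python
# Deterministic FSA as a transition-table dictionary (illustrative sketch).
# Exactly one entry per (state, input) pair: that is what makes it deterministic.

transitions = {
    ("even", "0"): "even", ("even", "1"): "odd",
    ("odd",  "0"): "odd",  ("odd",  "1"): "even",
}

def accepts(string, start="even", accepting=frozenset({"even"})):
    state = start
    for symbol in string:
        state = transitions[(state, symbol)]   # one arrow for each input
    return state in accepting                  # yes/no output based on the final state

print(accepts("11"))    # True  (two 1s: even)
print(accepts("1011"))  # False (three 1s: odd)
```

The same dictionary is also a transition table and a transition function in the notes' sense: looking up (Current State, Input Symbol) yields the Next State.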


  • FSM- Terminology
  • Halting state- a state with no outgoing transitions
  • Trigger- an input which causes a transition

Tuesday, 13 November 2012

3.1.2 Comparing Algorithms

Comparing Algorithms
Problems with different time complexities

1) Constant complexity
  • O(1)
  • Takes the same amount of time no matter how big the problem is.
  • E.g. Sorting the first 10 records in a file. Or finding if a number is odd or even.
  • IF n mod 2 = 1 THEN odd = true ELSE odd = false
  • Can have more than one action
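The IF/THEN test above written as a short Python sketch: a single mod operation, so it takes the same time however large n is.

```python
# O(1): one arithmetic operation regardless of the size of n.
def is_odd(n):
    return n % 2 == 1

print(is_odd(7))   # True
print(is_odd(10))  # False
```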
2) Linear complexity
  • O(n)
  • As the size of the problem increases, the time taken to solve the problem grows at the same rate.
  • E.g. Cutting the grass
  • Less efficient than O(1) complexity algorithms
3) Logarithmic complexity
  • O(log n)
  • As the size of the problem grows, the time taken to solve the problem increases, but at a slower rate.
  • On sorted lists, there are faster searches than the linear search, e.g. the binary search
  • Finding an item using the binary search takes on average less time than using the linear search.
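A sketch of the binary search: each comparison halves the portion of the sorted list still to be checked, which is why the number of steps grows as O(log n).

```python
# Binary search on a sorted list: O(log n) comparisons.
def binary_search(items, target):
    low, high = 0, len(items) - 1
    while low <= high:
        mid = (low + high) // 2
        if items[mid] == target:
            return mid             # found: return the index
        elif items[mid] < target:
            low = mid + 1          # discard the lower half
        else:
            high = mid - 1         # discard the upper half
    return -1                      # not present

print(binary_search([2, 5, 8, 12, 16, 23, 38], 16))  # 4
```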
4) Polynomial complexity
  • O(n^k)
  • As the size of the problem increases, the time taken to solve it grows faster, but can be solved in a reasonable amount of time
  • Includes quadratic and cubic
  • Quadratic = O(n^2)
  • If a group of n people all need to shake hands with each other then
                        People            Handshakes needed
                        2                    1
                        3                    3
                        4                    6
                        5                    10
                        6                    15
                        7                    21
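The handshake counts follow the quadratic formula n(n-1)/2: each of the n people shakes hands with the other n-1, and each handshake is shared between two people. A quick sketch:

```python
# Handshakes needed for n people: n*(n-1)//2, a quadratic (O(n^2)) quantity.
def handshakes(n):
    return n * (n - 1) // 2

for people in range(2, 8):
    print(people, handshakes(people))  # 2 1, 3 3, 4 6, 5 10, 6 15, 7 21
```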
5) Exponential complexity
  • O(k^n)
  • As the size of the problem grows, the time taken to solve it grows by larger and larger amounts.
  • Only small size versions of these problems can be solved in a reasonable amount of time.
  • An example is the Emperor's Chess Board Problem
  1. The emperor was going to lose his throne to an invading army, he asked for help from a neighbouring war lord.
  2. The warlord agreed as long as the pay was good enough: either 1/4 of China's rice crop for 5 years, or payment via a chessboard.
  3. On the 1st square would be one grain of rice, on the 2nd square 2 grains, on the 3rd 4 grains, on the 4th 8 grains, and so on, doubling on each square.
  4. The emperor chose the chessboard, and when the invading army was defeated the warlord went to collect his payment. The emperor started putting grains of rice on the chessboard.
  5. On the first row there were 255 grains of rice, on the second 65,280, and on the third 16,711,680.
  6. There was several hundred times more rice on the last square of the chessboard alone than was produced by China.
  7. The emperor was forced to abdicate.
  • Although this problem can be solved, it cannot be solved in a reasonable amount of time for anything but the smallest of inputs.
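The chessboard numbers can be checked directly: square k holds 2^(k-1) grains, so each row of eight squares (and the running total) grows exponentially.

```python
# Grains of rice on the Emperor's chessboard: square k holds 2**(k-1) grains.
def grains_in_row(row):                       # rows numbered 1..8
    first = (row - 1) * 8
    return sum(2 ** square for square in range(first, first + 8))

print(grains_in_row(1))   # 255
print(grains_in_row(2))   # 65280
print(grains_in_row(3))   # 16711680
print(2 ** 64 - 1)        # total over all 64 squares
```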



6) Factorial Complexity
  • O(n!)
  • This complexity grows even faster than exponential complexity.
  • An example of this is the Travelling Salesman Problem
  • It cannot be solved in a reasonable amount of time.
  • The brute-force method for the Travelling Salesman Problem looks at all possible routes to find the shortest

            Cities            Possibilities            Time taken (1 route per nanosecond)
            3                   1                            1 nanosecond
            4                   3                            3 nanoseconds
            5                   12                          12 nanoseconds
            6                   60                          60 nanoseconds
            7                   360                        360 nanoseconds
            16                 653,837,184,000      about 11 minutes
           
  • It is possible to solve the problem in exponential (rather than factorial) time using other methods, such as dynamic programming
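A brute-force Travelling Salesman sketch: fix the start city and try every ordering of the rest, which is O(n!) work as n grows. The distance table here is a made-up example for the demo, not data from the notes.

```python
# Brute-force TSP: check every permutation of the cities (factorial growth).
from itertools import permutations

def shortest_tour(dist):
    cities = list(dist)
    best = None
    for order in permutations(cities[1:]):           # fix the start city
        tour = (cities[0],) + order + (cities[0],)   # round trip back to the start
        length = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
        if best is None or length < best[0]:
            best = (length, tour)
    return best

dist = {
    "A": {"A": 0, "B": 1, "C": 4},
    "B": {"A": 1, "B": 0, "C": 2},
    "C": {"A": 4, "B": 2, "C": 0},
}
print(shortest_tour(dist))  # (7, ('A', 'B', 'C', 'A'))
```

With n cities this loop runs (n-1)! times, which is why only small instances finish in a reasonable time.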



A graph showing the relationship between the size of a problem and the time taken to solve the problem



   





Monday, 12 November 2012

3.1.1 Information hiding and Abstraction

Information hiding and Abstraction
-Hiding the design complexity of a system to simplify it for the user.
-In Windows, the user is unaware of the complexity of the system underneath.
-Abstraction- the process of removing unnecessary details (information hiding) in the representation of a system.
-Reduces the complexity of a system.


-Levels of Abstraction
1. Application (e.g. Word)- Hides very complex processes behind an icon
2. High level programming language- Hides a number of low level commands behind a high level command.
3. Machine code- Hides the complexity of the underlying electronics behind a machine code command
4. Electronics- Hides the physical processes of atoms and electrons behind basic electronic components