Examinable Content
NEUR30004 Sensation, Movement and Complex Functions
University of Melbourne

NEUR30004, 2017                                        Peter Kitchener

Dear NEUR30004 students,

I know, and to an extent anticipated, that the lectures on reasoning - from Aristotle to deep artificial neural networks - have been quite challenging for many students. On the other hand, I appreciate the interest that a number (a smaller number, perhaps) of students have expressed in this topic. But “how to reason” is an important issue to consider. “Thinking” is really the most interesting thing the brain does, and mechanized thinking (ie computation) is the only other way we instantiate reasoning. “Mechanized reasoning” exerts one of the biggest influences on our thinking and our lives, and will increasingly exert influence on our societies into the future. As I said when I started the lectures on the topic of reasoning, it would be unsatisfactory if, in a final year subject in Systems Neuroscience - while claiming to study the most complex thing there is, the very means by which we are aware and curious and come to have knowledge and make sense of the universe - we avoided such topics as:

• The nature of complexity
• The nature of thought and reasoning
• How we produce or conceive knowledge
• Whether there might be practical, or even fundamental, limits to understanding

To not at least consider how knowledge is acquired, what (if any) knowledge is certain, and what problems are, in principle, solvable or insoluble, would be a strange omission if we want to know, at a non-trivial level, how our brain works. So I believe these are important topics, and eminently suitable topics to include within a Bachelor's degree that might actually confer the attributes we claim (check out the “Learning Outcomes” section of the handbook entry for the BSc, for example).

However, I am not entirely without insight into the relationship between difficult conceptual questions (such as “what is knowable?”) and students’ engagement with them; there is clearly a deep interest in a related question: “what is examinable?” While everything presented in the subject is, technically, examinable, there are certain details, and even topics, that are beyond the reasonable expectations of what a student can learn in a short time with limited preparation (and competing demands, such as other subjects). So, to try and distinguish between the main “take home messages” and the details that support them (which are not expected to form part of your working knowledge of the subject), I provide this summary of what is and isn’t examinable. It is not meant to be exhaustive and is a guide only.

The history of our concept of reasoning

What is NOT examinable? Pretty much all of the history (Aristotle, Frege, Russell etc.) is not something you need to know for the exam; it was provided as an introduction to seeing reasoning as a process, and as a context for where Gödel fits into the picture.

What is examinable? I’m hoping you have an appreciation of how paradoxes can arise from self-reference, and how self-reference and paradoxes are hard to avoid in formal systems (even those designed to avoid them).
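Not examinable, but if you would like to see how effortlessly self-reference produces a paradox in a mechanical setting, here is a toy sketch of my own (in Python, modelled on Grelling’s “heterological” paradox - a predicate that is true of exactly those predicates that are not true of themselves):

# A predicate is "heterological" if it is NOT true of itself.
def heterological(predicate):
    return not predicate(predicate)

# Is "heterological" itself heterological? Any answer would have to be
# the opposite of itself. Mechanically, the paradox shows up as a call
# that refers back to itself forever and never settles on True or False.
try:
    heterological(heterological)
except RecursionError:
    print("Undecided: the question endlessly refers back to itself.")

In a formal system the same move produces propositions that can be neither derived nor refuted, which is where we are headed next.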


Gödel’s theorem and Turing’s thesis

What is NOT examinable: The two proofs (or sketches of the way the proofs work) relating to the undecidability of the halting problem for Universal Turing Machines, and the very faint sketch of Gödel’s proof, are certainly NOT examinable. Students who wish to pursue this further, either because they are interested or because they are skeptics (both admirable qualities), can do so, and I hope what I presented will help at the start of what could be a very long journey.

What is examinable: You need to appreciate that it has been proven that there are some fundamental limits to what formal systems of logic (including mathematical logic) can do. The limit applies to formal systems of sufficient strength to include arithmetic. The limit is that not all true statements can be derived by the system, thus the system is said to be incomplete; another way to look at this is that there will be some propositions - written in the correct syntax of the formal language of the system - that can not be derived by the system, and neither can their negation; the system can’t tell us whether they are true or false - they are thus undecidable. You need to know that this is not saying that formal systems are limited because they are inconsistent.

You should have an appreciation of what we mean by an algorithm - when, for example, you add two large numbers together (on paper or “in your head”) you are implementing an algorithm that you were taught in primary school. You should have at least an intuitive sense of what recursion is - whether talking about cellular automata or computational methods (ie algorithms).

You need to have at least some basic conception of what a Turing machine is - ie the minimal requirements for mechanical computation. It is noteworthy that the minimum specification of the TM is also, in a sense, the maximum specification, in that any algorithmic process that can be performed can be performed on a TM. A universal TM is one that can take as input the instructions for how any particular TM works, and thus it can emulate any TM. The halting problem, which shows a limitation of UTMs, is essentially a similar result to Gödel’s theorem, showing that there are computational problems (ie questions that can be encoded as computer programs) that no algorithm can decide. You should think about how this might relate to how we think the brain works (ie thinks). (btw, see in the preceding sentence how easy and trivial self-reference is in normal language.) For example, what are the implications of our current conception of neurons - which are essentially computational units? Is there anything to suggest that such conceptions of neurons are not examples of a Turing Machine? (If you want to argue that they are not TMs, you will need to explain why it is that they can be completely implemented on a computer, ie a universal Turing machine!)
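Not examinable either, but for the curious: here is how little a Turing machine actually needs. This is a toy sketch of my own in Python (the function and state names are invented for illustration); the entire machine is a tape, a head position, a current state, and a finite table of rules - this particular three-rule table increments a binary number.

# A Turing machine reduced to its minimal requirements: a tape, a head,
# a current state, and a finite transition table
# (state, symbol) -> (symbol to write, head movement, next state).
def run_tm(tape, rules, state, head):
    tape = dict(enumerate(tape))          # sparse tape; blank cells are " "
    while state != "halt":
        write, move, state = rules[(state, tape.get(head, " "))]
        tape[head] = write
        head += move
    return "".join(tape[i] for i in sorted(tape)).strip()

rules = {
    # Binary increment, starting at the rightmost bit with a carry:
    ("carry", "1"): ("0", -1, "carry"),   # 1 + carry = 0, carry moves left
    ("carry", "0"): ("1", 0, "halt"),     # 0 + carry = 1, done
    ("carry", " "): ("1", 0, "halt"),     # carry falls off the leftmost bit
}

print(run_tm("1011", rules, "carry", 3))  # prints 1100 (11 + 1 = 12)

A universal TM is then nothing more exotic than a machine of this kind whose tape also contains a description of a rules table like the one above.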

Cellular automata

What is not examinable? There will be no questions about how the cellular automata systems are implemented in a computer program (though it is interesting that we simulate the synchronous parallel updating of the generations on computers that are actually only doing one thing at a time - but then again, it shouldn’t be surprising: we are dealing with UTMs). I will not ask about the history of the development of cellular automata, and I won’t ask whether Stephen Wolfram, who believes the future of science lies with this form of computation, is right or wrong about how big a deal cellular automata are. They are interesting to us because they are a clear demonstration that an incredibly simple system of interacting entities can generate a huge range of phenomena, including the most complex we can imagine (chaotic and random behavior).

What is examinable? You should appreciate that cellular automata are the spatial rendition of an algorithm: they take a very simple starting condition and allow the evolution of the forms they generate to be dictated by the forms they have already taken. In this sense, they are analogous to formal systems, whose axioms and rules of inference are very simple, yet vast bodies of mathematical theory can be derived from the repeated (recursive) activity of the system. It is also noteworthy that very, very simple cellular automata (like Wolfram’s rule 30) can generate something as complex as a random pattern. If you don’t think generating an infinite random sequence is difficult, explain how a formal system that included the standard axioms of arithmetic (and all the mathematics that flows from them) could generate an infinite random number. (This last question is not examinable.)

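Again not examinable, but rule 30 is small enough to sketch in a few lines. This is my own toy rendering in Python (with wrap-around edges for simplicity): the “30” literally is the rule, since its binary digits (00011110) form the lookup table that maps each three-cell neighbourhood to the next value of the middle cell.

# Rule 30: each cell's next value is read from bit (left*4 + centre*2 + right)
# of the number 30 (binary 00011110).
def step(cells, rule=30):
    n = len(cells)
    return [(rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
            for i in range(n)]

row = [0] * 31
row[15] = 1                     # the very simple starting condition: one live cell
for _ in range(16):
    print("".join("#" if c else "." for c in row))
    row = step(row)

From that single starting cell, the right-hand side of the printed triangle is already visibly disordered - the random pattern referred to above.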
Artificial Neural Networks

What is NOT examinable: Any and all of the mathematics and computer-science-specific concepts relating to how ANNs work are not examinable.

What is examinable: In terms of how ANNs work, it is sufficient to know that the back-propagation algorithm works by making tiny adjustments to the connection weights, in the direction that lowers the error between the network’s actual output and the desired output. A very crucial development was that of back-propagation algorithms whose error signal propagates meaningfully through many layers of hidden units. It is also important, from a computational point of view, that the mathematics (vast numbers of derivatives) that must be solved for all the nodes includes lots of re-usable results, so the computation scales with the size of the network (rather than exponentially with the size of the network, which would significantly limit their utility).

You should know, in very broad terms, why ANNs are so much better now than they were in the 80s and 90s (ie in what sense are they “deep”?). You should reflect on how they learn, what input they learn with (ie how they are trained), and what the essential starting configurations are. You need to have at least an inkling of what sort of thing deep ANNs are good at (and how good). It is reasonable to ask in what ways artificial neural networks are like biological neural networks.

It should also have crossed your mind that if the bright future of machine learning through deep artificial neural networks comes to fruition in the way many commentators are predicting, and if these “pattern detectors” become the means by which we seek to understand complex systems, will their limitations limit what we can discover? Conversely, will we be insufficiently “computationally complex” to understand the patterns that DANNs find in complex phenomena?
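To make “tiny adjustments in the direction that lowers the error” concrete (not examinable, and deliberately minimal - a single weight and no hidden layers, so this is plain gradient descent rather than full back-propagation; the numbers are my own toy example):

# Learn w so that output = w * x matches the targets (the "true" w is 2.0).
xs = [1.0, 2.0, 3.0]
targets = [2.0, 4.0, 6.0]

w = 0.1                                     # arbitrary starting configuration
learning_rate = 0.01
for _ in range(500):
    for x, target in zip(xs, targets):
        error = w * x - target              # actual output minus desired output
        w -= learning_rate * 2 * error * x  # tiny step against d(error^2)/dw
print(round(w, 3))                          # converges to ~2.0

Back-propagation is what makes this same idea workable when the error has to be shared out across millions of weights in many layers of hidden units.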

And this takes us to the topic of insight. What is insight? As a means of generating knowledge, how similar or different is it to algorithmic processes? Can we compare it to what we know about formal systems (and Gödel’s unprovable truths)?

What’s the take-home message?

By far the most significant consideration is to what extent the processes exemplified by formal systems, UTMs and cellular automata (ie simple processes that can iteratively and recursively apply their simple rules to generate ordered and disordered output of staggering complexity) are sufficient to be a model (or even only a very loose metaphor) for how brains work, or whether there are things that brains do that go beyond the (admittedly impressive) capacities of these computational paradigms.

There are aspects of what these systems do that look a lot like processes we see in nervous systems - for example, artificial neural networks learn by adjusting the strengths of connections between nodes; these nodes summate inputs and transmit an output divergently, via adjustable connections, to other nodes. Also, once they learn they are robust, somewhat redundant, and insensitive to localized damage, but they are not good at learning new, different tasks (they have a one-time critical period). Observations like the reconfiguration of the ferret auditory cortex by visual input suggest that the learning process (how to organize connectivity in response to input structure) is a function of the input, not of the inherent (genetically encoded) properties of the cortical regions.

In my opinion, the thing that human reasoning entails which is not readily explicable as a known computational process is what we call insight. There is very little research into insight, but it is generally taken to involve something other than searching through the established pathways of thought in our minds: we find a solution by stepping outside of that “search” and seeing if there is a similar pattern or scenario elsewhere in our conceptual library that might fit the problem. This could be said to be similar to the undecidable propositions in formal systems - things that we can see to be true but that can’t be generated (“proved”) by the formal system that reasons - mechanically - from established axioms by defined rules.

In the MIU game you need the insight that “I can never generate three Is” (ie we can never get MIII and thus can’t get MU). How do we do this? I think at some level we start to see the three Is as “3” and see that replacing all instances of “III”, without leaving any remainder, is the same thing as “divisible by 3”. When it dawns on us that we are never going to get a number of Is that is a multiple of 3, we realize that MU is impossible to generate in the MIU formal system.
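Not examinable, but the invariant is easy to watch mechanically. In this toy Python sketch of my own, we blindly apply the four MIU rules and check that the number of Is is never a multiple of 3 (it starts at 1, and the rules can only double it or subtract 3):

# The MIU system: start from "MI"; rules can append U after a trailing I,
# double everything after the M, replace III with U, or delete UU.
def successors(s):
    out = set()
    if s.endswith("I"):
        out.add(s + "U")                          # rule 1: xI -> xIU
    out.add("M" + s[1:] * 2)                      # rule 2: Mx -> Mxx
    for i in range(len(s) - 2):
        if s[i:i + 3] == "III":
            out.add(s[:i] + "U" + s[i + 3:])      # rule 3: III -> U
    for i in range(len(s) - 1):
        if s[i:i + 2] == "UU":
            out.add(s[:i] + s[i + 2:])            # rule 4: UU -> (nothing)
    return out

strings = {"MI"}
for _ in range(6):                                # six generations of derivations
    strings |= {t for s in strings for t in successors(s)}
print(all(s.count("I") % 3 != 0 for s in strings))  # True: MU is unreachable

The program, of course, only ever applies the rules; seeing why the final check must print True is precisely the “jumping out of the system” described below.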
This “jumping out of the system” (as Hofstadter describes it in “Gödel, Escher, Bach”) feels a lot like what we call insight - we stop searching (via applying the rules of the system) and take a more global view (which, of course, formal systems can’t - just as they can’t “look for the missing numbers” in the tq-system to find primes), and this eventually provides a sudden realization of the answer, one that is associated with a feeling of certainty (an insight). There is some compelling but still somewhat conjectural research on the mechanism of insight, and it is at least consistent with insight being associated with binding activity between cortical regions.

And it is this point that I think ties a lot of things together - how can it be that we can somehow synchronize the concepts we are considering in one domain of thought with those in another domain? We do it all the time - our language, and our thoughts, rely very deeply on metaphor and abstraction: we are always seeking to understand phenomena by comparing them to other things we know about. Because our brain (especially our cerebrum) is functionally modular, different modules must be able to talk to each other, to get on the same wavelength (to use a metaphor).

It has been said that the human mind has “promiscuous interfaces”, which is just another way of saying that different modules or modalities seem to be able to share notes. This might be why we see so much topography in the representations in the cortex - certainly we see it in sensory systems, where we don’t think it has particular functional importance beyond that sensory system, but we also see it in the mapping of purely abstract concepts, like the ordinal mapping of number representations in parietal cortex; and there is evidence that semantic knowledge is ordered into conceptual dimensions that are also physical mappings in the cortex (yet another fascinating topic I didn’t get time to talk about, but Erica showed some aspects of this when looking at object representation in the ventral stream). These mappings may facilitate promiscuous interfacing, and hence the capacity for metaphor and even for insight. And that might be something that only brains can do. Currently.

Best wishes,
Peter K...

