Whole Brain Emulation
Upload, v.i., v.t. To become a figment of your computer's imagination.
Whole Brain Emulation (colloquially, mind uploading) is the hypothetical process of scanning a brain, creating an abstract, computable model of it (for example, a neural network), and running that model as an emulation. There is no guarantee that an emulation of a brain will produce a mind. WBE is important to transhumanism because of its radical potential for human enhancement, and it is often discussed in the context of life extension.
- 1 Overview
- 2 History
- 3 Science & Technology
- 3.1 Process
- 3.2 Slicing
- 3.3 Scanning
- 3.3.1 Optical Procedures
- 3.3.2 MRI Microscopy
- 3.3.3 X-Ray Microscopy
- 3.3.4 Atomic Beam Microscopy
- 3.3.5 Electron Microscopy
- 3.3.6 Nondestructive Procedures
- 3.3.7 Moravec Procedure
- 3.3.8 Summary
- 3.4 Simulation Hardware
- 3.5 Simulation Software
- 3.5.1 Levels of Abstraction
- 3.5.2 Analysis
- 3.5.3 Models of Neurons
- 3.5.4 Existing Simulators
- 3.6 Complications
- 3.7 Sensory/Motor Neuron Map
- 3.8 Map of Technological Capabilities
- 4 Timeline
- 5 Philosophy
- 6 Effects
- 7 Human Enhancement
- 8 Projects
- 9 Large-Scale Emulations So Far
- 10 Misconceptions
- 11 Criticism
- 12 Roadmap
- 13 Books
- 14 Resources
- 15 In Popular Culture
- 16 Glossary
- 17 People
- 18 See Also
- 19 References
The human brain occupies a volume of about 1.5 liters. It contains roughly 86 billion neurons and 500 trillion synapses. Neurons are accompanied by neuroglia, which provide structural and chemical support; a commonly cited (though disputed) figure is ten glia per neuron. Neuroglia may turn out to be necessary for a complete brain emulation, but they are not necessary to imitate the phenomenology of spiking.
Each neuron has a soma, a roughly spherical cell body that holds the organelles, and an axon hillock from which the axon sprouts and begins branching. Action potentials (signals) travel down the axon. Dendrites are the input side of the neuron; they are numerous and grow directly from the soma. Axons and dendrites (or sometimes axons and cell bodies) meet either at the terminal of the axon or at some point along its length, forming synaptic terminals or synapses en passant, respectively.
These structures, layered one on top of another, form the many regions of the brain, which have somewhat discrete properties that combine into higher-level cognition.
There are many theories of mind, most of which are better described as hypotheses. The currently accepted view is that the mind is an emergent property of lower-level interactions, in the same way that a computer program is an emergent property of abstraction stacked upon abstraction above the machine code.
Note that all analogies between the computer and the brain are bound to fail. As John Searle said,
Because we do not understand the brain very well we are constantly tempted to use the latest technology as a model for trying to understand it. In my childhood we were always assured that the brain was a telephone switchboard. ('What else could it be?') I was amused to see that Sherrington, the great British neuroscientist, thought that the brain worked like a telegraph system. Freud often compared the brain to hydraulic and electro-magnetic systems. Leibniz compared it to a mill, and I am told some of the ancient Greeks thought the brain functions like a catapult. At present, obviously, the metaphor is the digital computer.
- Minds, Brains and Science, p.44
The brain is not a computer; the brain is the brain. Let this serve as a caveat for the rest of the article.
The origin of WBE can be traced to J.D. Bernal's 1929 book The World, the Flesh and the Devil, although Bernal's idea is closer to gradual replacement than to the traditional slice-and-scan method. The following quote was reproduced in the WBE Roadmap:
Men will not be content to manufacture life: they will want to improve on it. For one material out of which nature has been forced to make life, man will have a thousand; living and organized material will be as much at the call of the mechanized or compound man as metals are to‐day, and gradually this living material will come to substitute more and more for such inferior functions of the brain as memory, reflex actions, etc., in the compound man himself; for bodies at this time would be left far behind. The brain itself would become more and more separated into different groups of cells or individual cells with complicated connections, and probably occupying considerable space. This would mean loss of motility which would not be a disadvantage owing to the extension of the sense faculties. Every part would now be accessible for replacing or repairing and this would in itself ensure a practical eternity of existence, for even the replacement of a previously organic brain‐cell by a synthetic apparatus would not destroy the continuity of consciousness.
... Finally, consciousness itself may end or vanish in a humanity that has become completely etherealized, losing the close‐knit organism, becoming masses of atoms in space communicating by radiation, and ultimately perhaps resolving itself entirely into light.
The first substantive paper on WBE was Merkle's 1989 Large Scale Analysis of Neural Structures, in which he predicted that "a complete analysis of the cellular connectivity of a structure as large as the human brain is only a few decades away".
Science & Technology
- The brain is extracted. Necessary extra fixatives are applied.
- The brain is segmented into as many pieces as needed.
- Each piece is sectioned and scanned by an ultramicrotome, an ATLUM, or some other machine.
- Current technology is rather slow, but massively parallel electron microscopes are being developed, primarily by the semiconductor industry. The speed of scanning depends on how many machines are available.
- A stack of electron micrographs is built, one for every slice.
- Noise is eliminated, different electron micrographs at the same height are pasted together by inferring edge connectivity.
- An edge detector traces the contours of neurites and cellular structures.
- Other algorithms detect intracellular structures of interest (e.g., polyribosome complexes).
- This is done for every layer.
- Another algorithm joins the edges in different layers, creating a 3D model of the brain.
- Another algorithm uses that model to create a graph of the connectivity of the brain. Each node in this graph is a neuron, and each neuron data structure is supplied with the extra information acquired during the structure-detection steps above.
- After the scan is complete, the graph is stored in a neuromorphic computer: a machine where every processor is a hardware implementation of some model of neurons.
- The graph at the lowest level of the brain stem is joined with a species-generic graph of the connectivity of the spinal cord, i.e., axons of the brain are matched to virtual nerve endings.
- The simulation of the body and brain, either connected to a robot or virtual avatar, is started.
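The end product of a pipeline like the one above is essentially a directed graph. A drastically simplified sketch of such a connectome data structure (all names and fields here are hypothetical illustrations, not part of any real reconstruction pipeline):

```python
from dataclasses import dataclass, field

@dataclass
class Synapse:
    target: int        # id of the postsynaptic neuron
    weight: float      # estimated strength, e.g. from vesicle counts
    en_passant: bool   # synapse along the axon shaft vs. at a terminal

@dataclass
class Neuron:
    soma_position: tuple[float, float, float]  # (x, y, z) in the scanned volume, nm
    synapses: list = field(default_factory=list)

# The whole-brain graph: neuron id -> Neuron
connectome: dict[int, Neuron] = {}

def add_synapse(pre: int, post: int, weight: float, en_passant: bool = False) -> None:
    """Record a directed edge found when joining edges across layers."""
    connectome[pre].synapses.append(Synapse(post, weight, en_passant))

# Toy example: one synaptic terminal between two reconstructed neurons.
connectome[0] = Neuron(soma_position=(0.0, 0.0, 0.0))
connectome[1] = Neuron(soma_position=(12_000.0, 0.0, 0.0))
add_synapse(0, 1, weight=0.8)
```

A real pipeline would attach far more state per node (morphology, compartment lists, receptor types), but the graph-of-neurons shape is the common core.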
The Automatic Tape-Collecting Lathe Ultramicrotome (ATLUM) is currently the fastest fully automated way to section and collect neural tissue.
Invented by transhumanist Ken Hayworth, the ATLUM is essentially a rotating block of tissue (typically 1 to 2 mm in width, 10 mm in length, and 0.5 mm in depth) mounted on a steel axle. As the tissue is turned by an ultraprecise bearing, the microtome (an ultrathin knife) advances by means of a piezoelectric component, creating a spiral cut of 40 nm-thick tissue, which slides onto the water on the back surface of the microtome. Each slice is then sandwiched between two carbon-coated mylar tapes, replacing the need for dexterous undergraduates to manually position sections on a TEM grid. The tape has the extra benefits of being a sturdy substrate and of preventing charging and beam damage during the scan. The position of the knife relative to the axle can be held to within approximately 10 nm, so many square millimeters of tissue can be sectioned.
After sectioning, the tissue can be preserved for later imaging or exposed to heavy metals for SEM. This produces greater signal-to-noise ratios and much faster imaging times than those provided by SBFSEM and FIBSEM. Imaging can also be done through the SEM backscattering signal, as in SBFSEM and FIBSEM, allowing the ATLUM to work without a TEM grid and removing section-width limitations.
Images obtained with an ATLUM are of a quality equivalent to that of traditional TEM, with a lateral resolution of about 5 nm. ATLUM-collected sections can be subjected to tomographic tilting (see SSET), giving additional depth data. This is done by tilting the section at various angles relative to the SEM beam, yielding depth resolutions finer than the section thickness itself.
One of the main differences between the ATLUM and other microtomes is that the others engage in discontinuous motion: sweep forward, collect a slice into the water boat, sweep back, repeat. The ATLUM instead generates a continuous cut by moving in a spiral around the sample. There may be some problems reconstructing large volumes from electron micrographs of slices that are flat but must be mapped back onto tubular surfaces of decreasing diameter, a problem avoided by straight, one-above-the-other cuts. A related point is that the ATLUM produces a single tape of tissue, while other microtomes produce independent slices that must be gathered and kept in order. With an ordinary ultramicrotome, an accident that randomizes the slices ruins the entire process; with the ATLUM, one only needs to distinguish one end of the tape from the other.
With an ATLUM, volumes of brain tissue in the range of cubic millimeters can be scanned in a completely automated manner. By contrast, SSET and SSTEM are only semi-automated: the operator must manually recover tissue from the knife's water boat and then manually place it into TEM slot grids using an eyelash, a terribly inefficient and unreliable process.
The Knife-Edge Scanning Microscope (KESM) is a machine that integrates sectioning and imaging into a single process, with the diamond knife and microscope moving together in the same assembly. It can scan large volumes of tissue at high resolution (though below that of SBFSEM). It would allow an entire mouse brain (≈1 cm³) to be scanned in about one hundred hours, producing 15 terabytes of raw image data.
The main limitation is the need to stain inside a volume. The array of stains is at present limited, but genetic modifications may allow for cells to express stains or become fluorescent.
The tissue is fixed and embedded in plastic. As the knife and the microscope move over the tissue, a thin strip is cut and scanned simultaneously. The KESM can scan up to 200 megapixels per second, with a field width of 2.5 mm at 64 μm resolution and 0.625 mm at 32 μm resolution. The sample is cut in a stair-step manner to reduce jittering and to allow knives larger than the microscope's field of view.
(Ordered by increasing resolution)
Optical microscopy methods are limited by the need for staining tissues to make relevant details stand out and the diffraction limits set by the wavelength of light (≈0.2 μm). The main benefit is that they go well together with various spectrometric methods (see below) for determining the composition of tissues.
Sub‐diffraction optical microscopy is possible, if limited. Various fluorescence‐based methods have been developed that could be applicable if fluorophores could be attached to the brain tissue in a way that provided the relevant information. Structured illumination techniques use patterned illumination and post‐collection analysis of the interference fringes between the illumination and sample image together with optical nonlinearities to break the diffraction limit. This way, 50 nm resolving power can be achieved in a wide field, at the price of photodamage due to the high power levels (Gustafsson, 2005). Near‐field scanning optical microscopy (NSOM) uses a multinanometre optic fiber to scan the substrate using near‐field optics, gaining resolution (down to the multi‐nanometer scale) and freedom from using fluorescent markers at the expense of speed and depth of field. It can also be extended into near field spectroscopy.
Confocal microscopy suffers from having to scan through the entire region of interest, and quality degrades away from the focal plane. Using inverse scattering methods, depth-independent focus can be achieved (Ralston, Marks et al., 2007).
All‐optical histology uses femtosecond laser pulses to ablate tissue samples, avoiding the need for mechanical removal of the surface layer (Tsai, Friedman et al., 2003). This treatment appears to change the tissue 2‐10 μm from the surface. However, Tsai et al. were optimistic about being able to scan a fluorescence labelled entire mouse brain into 2 terapixels at the diffraction limit of spatial resolution.
Another interesting application of femtosecond laser pulses is microdissection (Sakakura, Kajiyama et al., 2007; Colombelli, Grill et al., 2004). The laser was able to remove 100 μm samples from plant and animal material, modifying a ~10 μm border. This form of optical dissection might be an important complement to EM methods: after scanning the geometry of the tissue at high resolution, relevant pieces could be removed and analyzed microchemically. This could yield both the EM connectivity data and detailed biochemistry information. Platforms already exist that can inject biomolecules into individual cells, perform microdissection, isolate and collect individual cells using laser catapulting, and set up complex optical force patterns (Stuhrmann, Jahnke et al., 2006).
X-ray microscopy also allows spectromicroscopy, which adds information about the chemical environment of the tissue to the scan. Different amino acids can be detected with this method, and individual proteins could be classified.
Currently, X-ray microscopy is too slow to be relevant for WBE of mammals: Scanning X-ray microscopes have exposure times measured in minutes, although they deposit five to ten times less radiation in the sample.
Atomic Beam Microscopy
Atomic-beam microscopy uses a beam of neutral atoms, instead of electrons or photons, to image tissue. The de Broglie wavelength of thermal atoms is in the subnanometer range, so the resolution can match that of the best electron microscopes. If uncharged, inert atoms such as helium are used, the beam does not destroy tissue even at such resolutions. Moreover, helium atom scattering has a large cross-section with hydrogen, which might make it possible to detect membranes even in unstained tissue.
High resolution atomic beam microscopy has not been achieved, although low resolution has been. Recent developments have enabled focusing neutral atom beams to a spot size of tens of nanometers, which could be scanned across the tissue to construct the full image.
By tilting the sample relative to the electron beam, the TEM can detect depth and create high-resolution 3D images. Due to the limitation on depth (1 µm), it is useful mostly for scanning 'local' tissue samples, i.e., the organelles and cellular structures of small volumes of tissue.
Serial Section Transmission Electron Microscopy
By making ultrathin slices and imaging each one, a three-dimensional model can be built. This method has been used to build a model of a neuromuscular junction (from 50 nm-thick sections) and to construct the connectome of C. elegans. However, the process is labor intensive unless it can be automated.
One way of reducing the problems of sectioning is to place the microtome inside the microscope chamber (Leighton, 1981); for further contrast, plasma etching was later added to this approach (Kuzirian and Leighton, 1983). Denk and Horstmann (2004) demonstrated that backscattering contrast could be used instead in a SEM, simplifying the technique. They produced stacks of 50-70 nm-thick sections using an automated microtome in the microscope chamber, with lateral jitter of less than 10 nm. The resolution and field size were limited by the commercially available system. They estimated that tracing axons at 20 nm resolution with a S/N ratio of about 10 within a 200 μm cube could take about a day (while 10 nm × 10 nm × 50 nm voxels at S/N 100 would require a scan time on the order of a year).
Reconstructing volumes from ultrathin sections faces many practical challenges. Current electron microscopes cannot handle sections wider than 1-2 mm. Long series of sections are needed, but the risk of errors or damage increases with the length, and the number of specimen-holding grids becomes excessive (unless sectioning occurs inside the microscope (Kuzirian and Leighton, 1983)). The current state of the art for practical reconstruction from tissue blocks is about 0.1 mm³, containing about 10^7-10^8 synapses (Fiala, 2002).
The semiconductor industry has long used focused ion beams to perform failure analysis tests on integrated circuits. FEI researchers have shown that this can be used to image plastinated neural tissue.
An ion beam ablates the top 30 to 50 nanometers of a 100×100 μm tissue sample. The backscatter is imaged by the SEM, and the process is then repeated. It is similar to SBFSEM, but without the problems caused by high beam current.
Increasing Speed of SEM
From the above discussion it is clear that long imaging times constitute a major barrier to whole brain emulation using SEM techniques. However, there is currently a major research push toward massively parallel multi‐beam SEMs which has the potential to speed up SEM imaging by many orders‐of‐magnitude. This research push is being driven by the semiconductor industry as part of its effort to reduce feature sizes on computer chips below the level that traditional photolithography can produce.
The circuitry patterns within computer chips are produced through a series of etching and doping steps. Each of these steps must affect only selected parts of the chip, so areas to be left unaffected are temporarily covered by a thin layer of polymer which is patterned in exquisite detail to match the sub‐micron features of the desired circuitry. For current mass production of chips this polymer layer is patterned by shining ultraviolet light through a mask onto the surface of the silicon wafer which has been covered with the photopolymer in liquid form. This selectively cures only the desired parts of the photopolymer. To obtain smaller features than UV light can allow, electron beams (just as in a SEM) must instead be used to selectively cure the photopolymer. This process is called e‐beam lithography. Because the electron beam must be rastered across the wafer surface (instead of flood illuminating it as in light lithography) the process is currently much too slow for production level runs.
Several research groups and companies are currently addressing this speed problem by developing multi‐beam e‐beam lithography systems (Kruit, 1998; van Bruggen, van Someren et al., 2005; van Someren, van Bruggen et al., 2006; Arradiance Inc). In these systems, hundreds to thousands of electron beams raster across a wafer's surface, simultaneously writing the circuitry patterns. These multi‐beam systems are essentially SEMs, and it should be a straightforward task to modify them to allow massively parallel scanning as well (Pickard, Groves et al., 2003). For backscatter imaging (as in the SBFSEM, FIBSEM, and ATLUM technologies) this might involve mounting a scintillator with a grid of holes (one for each e‐beam) very close to the surface of the tissue being imaged. In this way the interactions of each e‐beam with the tissue can be read off independently and simultaneously.
It is difficult to predict how fast these SEMs may eventually get. A 1,000-beam SEM where each individual beam maintains the current 1 MHz acquisition rate for stained sections appears reachable within the next ten years. We can very tentatively apply this projected SEM speedup to ask how long imaging a human brain would take. First, assume a brain were sliced into 50 nm sections on ATLUM-like devices (an enormous feat which would itself take approximately 1,000 machines, each operating at 10x the current sectioning rate, a total of 3.5 years to accomplish). This massive ultrathin section library would contain the equivalent of 1.1×10^21 voxels (at 5×5×50 nm per voxel). Assuming judicious use of directed imaging within this library, only 1/10 may have to be imaged at this extremely high resolution (using much lower, and thus faster, imaging on white matter tracts, cell body interiors, etc.). This leaves roughly 1.1×10^20 voxels to be imaged at high resolution. If 1,000 SEMs, each utilizing 1,000 beamlets, were to tackle this imaging job in parallel, their combined data acquisition rate would be 1×10^12 voxels per second. At this rate the entire imaging task could be completed in less than 4 years.
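The arithmetic of this estimate is easy to check (all figures from the paragraph above; the brain volume is assumed to be ~1.4 liters):

```python
# Checking the imaging-time arithmetic above.
brain_volume_nm3 = 1.4e-3 * 1e27            # 1.4 liters in cubic nanometers
voxel_nm3 = 5 * 5 * 50                      # one 5x5x50 nm voxel
total_voxels = brain_volume_nm3 / voxel_nm3
print(f"{total_voxels:.1e} voxels")          # ~1.1e21, as stated

high_res_voxels = total_voxels / 10          # only 1/10 imaged at full resolution
voxels_per_second = 1_000 * 1_000 * 1e6      # 1,000 SEMs x 1,000 beamlets x 1 MHz
years = high_res_voxels / voxels_per_second / (3600 * 24 * 365)
print(f"{years:.1f} years")                  # under the quoted 4-year figure
```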
Main article: Non-destructive uploading
(This section is considerably more speculative than the rest of the article, and should be read with that in mind.)
Scanning of the neural structures may instead take the form of gradual replacement: a robot surgeon is equipped with a manipulator that subdivides into increasingly smaller branches. While the patient is awake and conscious, this manipulator removes cells, clamps blood vessels, and exposes synapses for analysis. Once the onboard computer has a good picture of what is going on, it creates a simulation of that specific volume of the brain and replaces the corresponding tissue with hardware running the simulation, using what amounts to magic nanofingers to plug everything together. After a while, the entire brain is composed of this hardware, maintaining the same functionality as before (Moravec, 1988). While this may help mitigate fears of loss of consciousness in an all-in-one 'kill, cut, scan' approach, it is far beyond current technology, since the system has to be seamlessly integrated with living, changing, moving biological tissue.
It is sometimes suggested that the successive aggregation of brain-computer interfaces to a brain will lead to a state where transfer is possible, by reaching a state where most functions are carried out in the external hardware and the brain is no longer necessary, or by reaching a point where the systems are so pervasive that it is possible to scan the whole of the brain with them, destructively or otherwise.
All forms of electron microscopy except SBFSEM have sufficient resolution to construct a graph of the connectivity of the brain and also to inspect the properties of individual synapses (e.g., by counting synaptic vesicles). It may be possible, with future modifications, for SBFSEM to reach the necessary resolution.
| Method | Resolution | Notes |
| --- | --- | --- |
| MRI | > 5.5 µm (non-frozen) | Does not require sectioning; may achieve better resolution on vitrified brains. |
| MRI microscopy | 3 µm | |
| NIR microspectroscopy | 1 µm | |
| All-optical histology | 0.7 µm | |
| KESM | 0.3 µm × 0.5 µm | |
| X-ray microtomography | 0.47 µm | |
| X-ray microscopy | 30 nm | Spectromicroscopy possible? |
| SBFSEM | 50-70 nm × 1-20 nm | |
| FIBSEM | 30-50 nm × 1-20 nm | |
| ATLUM | 40 nm × 5 nm | |
| SSET | 50 nm × 1 nm | |
| Atomic beam microscopy | 10 nm | Not implemented yet |
| NSOM | 5 nm? | Requires fluorescent markers; spectroscopy possible. |
| Array tomography | 1-20 nm SEM, 50×200×200 nm fluorescence | Enables multiple staining |
| TEM | < 1 nm | Basic 2D method; must be combined with sectioning or tomography for 3D imaging. Damage from high-energy electrons at high resolutions. |
| Level | Description | # entities | Bytes per entity | Memory demands (terabytes) | Earliest year at a cost of $1 million |
| --- | --- | --- | --- | --- | --- |
| 2 | Brain region connectivity | 10^5 regions, 10^7 connections | 3 (2-byte connectivity, 1-byte weight) | 3×10^-5 | Present |
| 3 | Analog network population model | 10^8 populations, 10^13 connections | 5 (3-byte connectivity, 1-byte weight, 1-byte extra state variable) | 50 | Present |
| 4 | Spiking neural network | 10^11 neurons, 10^15 synapses | 8 (4-byte connectivity, 4 state variables) | 8,000 | 2019 |
| 5 | Electrophysiology | 10^15 compartments × 10 state variables = 10^16 | 1 byte per state variable | 10,000 | 2019 |
| 6 | Metabolome | 10^16 compartments × 10^2 metabolites = 10^18 | 1 byte per state variable | 10^6 | 2029 |
| 7 | Proteome | 10^16 compartments × 10^3 proteins and metabolites = 10^19 | 1 byte per state variable | 10^7 | 2034 |
| 8 | States of protein complexes | 10^16 compartments × 10^3 proteins × 10 states = 10^20 | 1 byte per state variable | 10^8 | 2038 |
| 9 | Distribution of complexes | 10^16 compartments × 10^3 proteins and metabolites × 100 states/locations = 10^21 | 1 byte per state variable | 10^9 | 2043 |
| — | Full 3D EM map | 50×2.5×2.5 nm voxels | 1 byte per voxel, compressed | 10^9 | 2043 |
| 10 | Molecular dynamics | 10^25 molecules | 31 (2 bytes molecule type, 14 bytes position, 14 bytes velocity, 1 byte state). (Does that last byte refer to energy state? How do reactions happen anyway? You would need special considerations because most MD codes don't actually allow reactions --Eudoxia 14:55, 28 January 2012 (UTC)) | 3.1×10^14 | 2069 |
| 11 | Quantum chemistry | Either ~10^26 atoms, or a smaller number of quantum-state-carrying molecules | Qbits | ? | ? |
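The memory-demand column follows directly from multiplying the entity counts by the bytes per entity; a quick check of the first few rows (taking 1 terabyte as 10^12 bytes):

```python
# Recomputing the memory-demand column from the entity counts and bytes
# per entity in the table above.
def terabytes(entities: float, bytes_per_entity: float) -> float:
    return entities * bytes_per_entity / 1e12

assert terabytes(1e7, 3) == 3e-5       # level 2: 10^7 connections x 3 bytes
assert terabytes(1e13, 5) == 50        # level 3: 10^13 connections x 5 bytes
assert terabytes(1e15, 8) == 8_000     # level 4: 10^15 synapses x 8 bytes
assert terabytes(1e16, 1) == 10_000    # level 5: 10^16 state variables
assert terabytes(1e18, 1) == 1e6       # level 6: 10^18 state variables
print("memory column is internally consistent")
```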
A neuromorphic chip is a computer built to mimic specific architectures present in biological brains.
- SpiNNaker is a massively parallel, low power, neuromorphic supercomputer
- Manchester University, UK
- led by Professor Steve Furber
- collaborators from the universities of Southampton, Cambridge, and Sheffield
- model very large, biologically realistic, spiking neural networks in real time
- The machine will consist of 65,536 identical custom-built 18-core processors, giving it 1,179,648 cores in total
- Each processor has an on-board router forming links with six neighbours in a toroidal network, as well as its own 128 MB of memory to hold synaptic weights. One processor contains 18 identical cores clocked at 200 MHz. A core is an ARM968 processor core manufactured on a 130 nm process, with 32 kB of instruction memory, 64 kB of data memory, three controllers, a clock, and a timer. Although the ARM968 is old, it is used because the licensing agreement was committed to back in 2005. Each multiprocessor chip has about 100 million transistors, most of which are in the 55 blocks of 32 kB SRAM local instruction and data memory.
- On a separate die, but within the same chip package, is a 128 MB DDR SDRAM memory chip that operates at up to 166 MHz. This has about a billion transistors. The multiprocessor and memory chips are packaged together, one above the other, in a 19x19mm 300-pin ball grid array.
- Each 18-core processor dissipates about 1 watt. The SpiNNaker machine is expected to consume 50-100 kW peak, although the average is predicted to be well below 50 kW. For comparison, the average human brain consumes around 20 W.
- The finished million-processor machine will occupy several cabinets. At least six to eight, possibly more if the power density turns out to be an issue.
- A possible configuration would be: 48 chips per board, 12 boards per rack, 20 racks per cabinet, 6 cabinets. This is a purely speculative configuration dreamt up by this article's author.
- The design emphasizes massive parallelism and resilience to failure of individual components. With over one million cores, and one thousand simulated neurons per core, the machine will be capable of simulating one billion neurons. This equates to just over 1% of the human brain's 86 billion neurons.
- SpiNNaker will be a platform on which different algorithms can be tested
- many connectivities can be made
- SpiNNaker is a contrived acronym derived from Spiking Neural Network Architecture.
- The project started in 2005 and is currently funded by a UK government grant until early 2014. The microchips were manufactured and delivered to the lab in June 2011. A prototype with 864 cores was built in mid-2012. The full machine with over 1 million cores is expected to be complete by the end of 2013.
- The SpiNNaker machine, when complete by the end of 2013, will be able to simulate around 1 billion neurons. This equates to just over 1% of the human brain's 86 billion neurons. If the system proves successful then similar machines can be built to take advantage of more advanced processors. For example, the 130 nm process used for the SpiNNaker chips is over a decade old - this process was used for the consumer processors that went on sale starting in 2001. If a more modern process were used, for example 22 nm as used in 2012's consumer-level devices, then power consumption could be reduced by a factor of 10.
- Low-Power Chips to Model a Billion Neurons
- Professor Steve Furber giving a talk at Edinburgh University called 'Building brains'
- Power-efficient simulation of detailed cortical microcircuits on SpiNNaker
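The capacity figures quoted above can be verified with a few lines of arithmetic:

```python
# Checking the SpiNNaker capacity figures quoted above.
chips = 65_536                  # custom 18-core multiprocessor chips
cores_per_chip = 18
cores = chips * cores_per_chip
assert cores == 1_179_648       # "1,179,648 cores in total"

neurons_per_core = 1_000
simulated_neurons = cores * neurons_per_core
human_brain_neurons = 86e9
share = simulated_neurons / human_brain_neurons
print(f"{simulated_neurons:.2e} neurons = {share:.1%} of a human brain")
# ~1.18e9 neurons, i.e. "just over 1%" of 86 billion
```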
The BrainScaleS project aims to understand information processing in the brain at different scales ranging from individual neurons to whole functional brain areas. The research involves three approaches: (1) in vivo biological experimentation; (2) simulation on petascale supercomputers; (3) the construction of neuromorphic processors. The goal is to extract generic theoretical principles of brain function and to use this knowledge to build artificial cognitive systems.
The neuromorphic hardware is based around wafer-scale analog VLSI. Each 20-cm-diameter silicon wafer contains 384 chips, each of which implements 128,000 synapses and up to 512 spiking neurons. This gives a total of around 200,000 neurons and 49 million synapses per wafer. The VLSI models operate considerably faster than their biological originals, allowing the emulated neural networks to run tens of thousands of times faster than real time.
The project is a European consortium of 13 research groups led by a team at Heidelberg University, Germany. The project started in January 2011 and has funding from the European Union through the end of 2014.
May 25, 2012 - New video tour of the neuromorphic hardware shows one artificial spiking neuron triggering the firing of a second neuron.
Jan 23, 2012 - The fully-assembled wafer-scale system shows its first spikes by the artificial neurons.
Aug 25, 2011 - Neural network wafers arrive at the lab in Germany, sent from the UMC fabrication plant in Taiwan.
The BrainScaleS hardware is based around wafer-scale integration of neuromorphic processors. The silicon wafers are 20 cm in diameter and contain an array of identical, tightly connected chips. The circuitry is mixed-signal; that is, it contains both analog and digital circuits. The simulated neurons themselves are analog, while the synaptic weights and interchip communication are digital.
One wafer contains 48 reticles, and each reticle contains 8 HICANN chips (High Input Count Analog Neural Network), for a total of 384 identical chips per wafer. A HICANN chip is 5×10 mm in size. Each one contains an ANC (Analog Neural Core), its central functional block, plus supporting circuitry. Each HICANN implements 128,000 synapses and 512 membrane circuits, which can be grouped together to form simulated neurons.
The number of neurons per chip depends on how many synapses are configured per neuron. At the maximum of 16,000 pre-synaptic inputs per neuron, 8 neurons are possible per chip. At the maximum of 512 neurons per chip, only about 250 synapses per neuron are available.
Thus, each wafer provides a total of 49,152,000 synapses and up to 196,608 neurons. This assumes that every chip on the wafer is flawless and functional, which is not always the case.
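The per-wafer totals follow directly from the per-chip figures; a quick sanity check:

```python
# Checking the per-wafer BrainScaleS totals quoted above.
reticles_per_wafer = 48
hicann_per_reticle = 8
chips_per_wafer = reticles_per_wafer * hicann_per_reticle
assert chips_per_wafer == 384

synapses_per_chip = 128_000
membrane_circuits_per_chip = 512
assert chips_per_wafer * synapses_per_chip == 49_152_000
assert chips_per_wafer * membrane_circuits_per_chip == 196_608

# Grouping synapses onto fewer neurons: 16,000 inputs per neuron
# leaves 128,000 / 16,000 = 8 neurons per chip.
assert synapses_per_chip // 16_000 == 8
print("wafer totals are consistent")
```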
The wafer is supported on an aluminum plate which also serves as a heat sink. A multi-layer printed circuit board (PCB) is placed on top of the wafer and this serves as the input/output interface to the neural circuitry. Larger systems can be built by interconnecting several wafer modules.
The circuitry implements time-continuous leaky integrate-and-fire neurons with conductance-based synapses. Neural networks can be created with both short-term and long-term plasticity mechanisms. Because of the timescales involved in the chip operation, the neural networks can be run thousands of times faster than their real-time biological counterparts. Altogether, the BrainScaleS architecture shows promise for studying Hebbian learning, STDP, and cortical dynamics.
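The leaky integrate-and-fire model with a conductance-based synapse, the class of model the circuitry implements, can be sketched in software in a few lines. The parameter values below are illustrative textbook numbers, not the actual BrainScaleS hardware constants:

```python
# Minimal conductance-based leaky integrate-and-fire neuron, integrated
# with forward Euler. Parameters are illustrative only.
dt      = 0.1e-3    # timestep (s)
C_m     = 200e-12   # membrane capacitance (F)
g_leak  = 10e-9     # leak conductance (S); membrane tau = C_m/g_leak = 20 ms
E_leak  = -65e-3    # resting potential (V)
E_exc   = 0.0       # excitatory reversal potential (V)
tau_syn = 5e-3      # synaptic conductance decay time (s)
V_th    = -50e-3    # spike threshold (V)
V_reset = -65e-3    # post-spike reset potential (V)

V, g_syn = E_leak, 0.0
spikes = []
for step in range(int(0.5 / dt)):       # 500 ms of simulated time
    if step % 200 == 0:                 # presynaptic spike every 20 ms...
        g_syn += 30e-9                  # ...kicks the synaptic conductance
    g_syn -= dt * g_syn / tau_syn       # conductance decays exponentially
    I = g_leak * (E_leak - V) + g_syn * (E_exc - V)
    V += dt * I / C_m                   # leaky integration of the input current
    if V >= V_th:                       # threshold crossing: emit a spike
        spikes.append(step * dt)
        V = V_reset
print(f"{len(spikes)} spikes in 500 ms")
```

The hardware runs the same dynamics as analog physics rather than as a numerical loop, which is where its large speed-up over real time comes from.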
The neuromorphic hardware was designed at the universities in Heidelberg and Dresden. The fabrication was done by UMC in Taiwan.
Supercomputers are used to perform simulations of large-scale neural networks. The aim is to develop mathematical models of such networks. These models will then be used later to design the neuromorphic hardware.
The simulation work is led by Professor Markus Diesmann of the Computational Neurophysics group at the Jülich Research Center in the town of Jülich, Germany.
The simulations are run on the JUGENE supercomputer - a Blue Gene/P system installed at Jülich. As of May 2011 this is ranked the 13th fastest supercomputer in the world. It has 294,912 processor cores and a performance of around 1 petaflops.
The simulations are used to test mathematical models of neural circuits. The software used is NEST (NEural Simulation Tool). This simulates networks of point neurons or neurons with a small number of compartments.
Although very large scale networks have been previously investigated, e.g. Izhikevich, the underlying simulation technologies have not been described in sufficient detail to be reproducible by other research groups.
Recent work optimising the memory consumption of NEST showed that a network of 59 million neurons, with 10,000 synapses per neuron, can be distributed over all 294,912 cores of JUGENE. Networks of 100 million neurons and a trillion synapses are also theoretically realizable - either by increasing the number of cores, or reducing the overhead for neurons. This is still about three orders of magnitude away from the human brain, however, which has around 86 billion neurons and 1,000 trillion synapses.
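The scaling in the previous paragraph is dominated by synapse storage, since synapses outnumber neurons roughly 10,000 to 1. The sketch below is an illustrative estimate only; the bytes-per-synapse value is an assumption for the sake of the calculation, not a NEST constant:

```python
# Rough memory-scaling estimate in the spirit of the NEST/JUGENE figures.
CORES = 294_912               # JUGENE (Blue Gene/P)
BYTES_PER_SYNAPSE = 24        # assumed: target index + weight + delay + overhead

def total_bytes(neurons, synapses_per_neuron):
    # Synapse storage dominates, so neuron state is ignored here.
    return neurons * synapses_per_neuron * BYTES_PER_SYNAPSE

benchmark = total_bytes(59e6, 10_000)          # the 59-million-neuron network
print(f"{benchmark / 1e12:.1f} TB total")
print(f"{benchmark / CORES / 2**20:.0f} MiB per core")

# Human scale (~86e9 neurons, ~1,000 trillion synapses) is about
# three orders of magnitude more:
print(f"{total_bytes(86e9, 11_600) / 1e15:.0f} PB at human scale")
```

Even with this crude model, the benchmark network fits in tens of MiB per core, while a human-scale network lands in the tens-of-petabytes range, consistent with the "three orders of magnitude" gap stated above.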
Paper published January 2012: Meeting the memory challenges of brain-scale network simulation
The JUGENE supercomputer is scheduled for decommissioning on 31 July 2012 and will be replaced by a Blue Gene/Q system called JUQUEEN. It will have 131,072 compute cores and a peak performance of 1.6 petaflops. Each core is an IBM PowerPC A2 running at 1.6 GHz.
BrainScaleS is funded by the European Union. It received €8.5 million initially, plus €700,000 in an extension.
The funding comes from the Brain ICT program, which in turn is part of the Seventh Framework Programme (FP7).
The project is set to run from January 1, 2011 until December 31, 2014.
Levels of Abstraction
The following table depicts the various levels of organization and scale at which the brain can be emulated, with increasing accuracy, increasing scanning difficulty, and increasing computational complexity. However, the amount of understanding required is reduced the further we go: the first level of abstraction requires a complete, high-level understanding of the mind. Not the brain, but the abstract processes of cognition. The lowest level could be done with off-the-shelf software (if you have a couple thousand dollars to buy a copy of Gaussian), but requires hardware beyond Greg Egan's imagination and scanning at the level of single atoms.
|1||Abstract model of the mind||"Classic AI", high level representations of information and information processing. Built from the top down knowing the person's personality, ideas, etc., or using species-generic characteristics.|
|2||Brain region connectivity||Each area represents a functional module, connected to others according to an abstract, species-generic connectome.|
|3||Analog network population model||The population of neurons and their connectivity. Activity and states of neurons are represented as time-averages. This is similar to connectionist models using Artificial Neural Networks, rate-model neural simulations and cascade models.|
|4||Spiking neural network||As above, plus firing properties, firing state and dynamical synaptic states. Integrate-and-fire models, reduced compartment models, but also some minicolumn models, e.g. the Izhikevich model.|
|5||Electrophysiology||As above, plus membrane states (Ion channel types, properties, distribution, states), ion concentrations, currents, voltages, modulation states. Multi-compartment model simulations only.|
|6||Metabolome||As above, plus concentrations of metabolites and neurotransmitters in compartments.|
|7||Proteome||As above, plus concentrations of proteins and gene expression levels.|
|8||States of protein complexes||As above, plus protein quaternary structures.|
|9||Distribution of complexes||As above, plus "locome" information and internal cellular geometry.|
|10||Molecular dynamics||As above, plus molecular coordinates and molecular-level scanning.|
|11||Quantum chemistry||Quantum interactions, orbitals. Requires a complete .pdb of the entire brain, besides being completely computationally intractable.|
Various methods are being developed to automatically align the images in a stack so that they match best. The simplest method is finding the combination of translation, scaling and rotation that works best. However, this runs the risk of over-matching.
Human brain and rat image stacks have been corrected with good results using an elastic model to correct distortion.
Noise removal is one of the oldest problems in the image processing side of computer science and thus has extensive literature and a strong research interest. In the case of brain scanning, the kinds of noise imparted by the scanning method or by the brain itself are known, which makes removal easier. Light variations and knife chatter have, for example, been removed from KESM data.
Scanning a volume as large as the brain is likely to produce large volumes where data is lost (for example, the KESM suffers data loss up to 5 µm in width between different columns). In a sufficiently small case, surrounding data may be used to probabilistically interpolate the brain structure in the lost areas. In a large enough volume, however, interpolation is not sufficient, and one must generate a brain structure to fill the lost volume, using knowledge of the surrounding structure (for example, pyramidal cells dominate the cerebral cortex, so a lost volume in the cortex should not be filled mostly with stellate cells).
This is a high-priority issue, since lost or poorly interpolated data may cause mistracing and an inexact emulation.
Automated tracing of neurons imaged using confocal microscopy has been attempted using a variety of methods. Even if the scanning method eventually used takes a different approach, it seems likely that knowledge gained from these reconstruction methods will be useful. One approach is to enhance edges and find the optimal joining of edge pixels/voxels to detect contours of objects. Another is skeletonization. For example, (Urban, O’Malley et al., 2006) thresholded neuron images (after image processing to remove noise and artefacts), extracting the medial axis tree. (Dima, Scholz et al., 2002) employed a 3D wavelet transform to perform a multiscale validation of dendrite boundaries, in turn producing an estimate of a skeleton. A third approach is exploratory algorithms, where the algorithm starts at a point and uses image coherency to trace the cell from there. This avoids having to process all voxels, but risks losing parts of the neuron if the images are degraded or unclear. (Al‐Kofahi, Lasek et al., 2002) use directional kernels acting on the intensity data to follow cylindrical objects. (Mayerich and Keyser, 2008) use a similar method for KESM data, accelerating the kernel calculation by using graphics hardware. (Uehara, Colbert et al., 2004) calculate the probability of each voxel belonging to a cylindrical structure, and then propagate dendrite paths through it. One weakness of these methods is that they assume cylindrical shapes of dendrites and the lack of adjoining structures (such as dendritic spines). By using support‐vector machines that are trained on real data a more robust reconstruction can be achieved (Santamaría‐Pang, Bildea et al., 2006). Overall, tracing of branching tubular structures is a major interest in medical computing. A survey of vessel extraction techniques listed 14 major approaches, with several examples of each (Kirbas and Quek, 2004). The success of different methods is modality‐dependent.
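The "exploratory" family of algorithms can be illustrated with a toy sketch: start from a seed voxel and grow along contiguous bright voxels, never touching the rest of the image. This is purely illustrative; real tracers use directional kernels, wavelets, or trained classifiers as cited above, not a plain intensity threshold.

```python
# Toy exploratory tracer: breadth-first growth from a seed pixel
# through pixels whose intensity exceeds a threshold (4-connectivity).
from collections import deque

def trace(image, seed, threshold=0.5):
    """Return the set of pixels reachable from `seed` through
    above-threshold pixels. `image` is a 2D list of intensities."""
    rows, cols = len(image), len(image[0])
    visited, queue = set(), deque([seed])
    while queue:
        r, c = queue.popleft()
        if (r, c) in visited or not (0 <= r < rows and 0 <= c < cols):
            continue
        if image[r][c] <= threshold:
            continue
        visited.add((r, c))
        queue.extend([(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)])
    return visited

# A bright "dendrite" running along the middle row of a dim image:
img = [[0.9 if r == 1 else 0.1 for c in range(5)] for r in range(3)]
path = trace(img, (1, 0))
```

The advantage, as noted above, is that only voxels near the structure are visited; the weakness is equally visible: a single dim gap in the dendrite would stop the trace cold.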
In electron micrographs, synapses are currently recognized using the criteria that within a structure there are synaptic vesicles adjacent to a presynaptic density, a synaptic density with electron‐dense material in the cleft and densities on the cytoplasmic faces in the pre‐ and postsynaptic membranes (Colonnier, 1981; Peters and Palay, 1996).
One of the major unresolved issues for WBE is whether it is possible to identify the functional characteristics of synapses, in particular synaptic strength and neurotransmitter content, from their morphology.
In general, cortical synapses tend to be either asymmetrical “type I” synapses (75‐95%) or symmetrical “type II” synapses (5‐25%), based on having a prominent or thin postsynaptic density. Type II synapses appear to be inhibitory, while type I synapses are mainly excitatory (but there are exceptions) (Peters and Palay, 1996). This allows at least some inference of function from morphology.
The shape and type of vesicles may also provide clues about function. Small, clear vesicles appear to mainly contain small‐molecule neurotransmitters; large vesicles (60 nm diameter) with dense cores appear to contain noradrenaline, dopamine or 5‐HT; and large vesicles (up to 100 nm) with 50‐70 nm dense cores contain neuropeptides (Hokfelt, Broberger et al., 2000; Salio, Lossi et al., 2006). Unfortunately there does not appear to be any further distinctiveness of vesicle morphology to signal neurotransmitter type.
Cell Type Identification
Distinguishing neurons from glia and identifying their functional type requires other advances in image recognition.
The definition of neuron types is debated, as well as the number of types. There might be as many as 10,000 types, generated through an interplay of genetic, posttranscriptional, epigenetic, and environmental interactions (Muotri and Gage, 2006). There are some 30+ named neuron types, mostly categorized based on chemistry and morphology (e.g. shape, the presence of synaptic spines, whether they target somata or distal dendrites). Distinguishing morphologically different groups appears feasible using geometrical analysis (Jelinek and Fernandez, 1998).
In terms of electrophysiology, excitatory neurons are typically classified into regular‐spiking, intrinsic bursting, and chattering, while interneurons are classified into fast‐spiking, burst spiking, late‐spiking and regular spiking. However, alternate classifications exist. (Gupta, Wang et al., 2000) examined neocortical inhibitory neurons and found three different kinds of GABAergic synapses, three main electrophysiological classes divided into eight subclasses, and five anatomical classes, producing 15+ observed combinations. Examining the subgroup of somatostatin‐expressing inhibitory neurons produced three distinct groups in terms of layer location and electrophysiology (Ma, Hu et al., 2006) with apparently different functions.
- The morphology and electrophysiology of inhibitory neurons in the 2nd and 3rd layers of the prefrontal cortex also indicate the existence of different clustered types.
Overall, it appears that there exist distinct classes of neurons in terms of neurotransmitter, neuropeptide expression, protein expression (e.g. calcium binding proteins), and overall electrophysiological behaviour. Morphology often shows clustering, but there may exist intermediate forms. Similarly, details of electrophysiology may show overlap between classes, but have different population means.
Some functional neuron types are readily distinguished from morphology (such as the five types of the cerebellar cortex). A key problem is that while differing morphologies likely implies differing functional properties, the reverse may not be true. Some classes of neurons appear to show a strong link between electrophysiology and morphology (Krimer, Zaitsev et al., 2005) that would enable inference of at least functional type just from geometry. In the case of layer 5 pyramidal cells, some studies have found a link between morphology and firing pattern (Kasper, Larkman et al., 1994; Mason and Larkman, 1990), while others have not (Chang and Luebke, 2007). It is quite possible that different classes are differently identifiable, and that the morphology‐function link could vary between species.
- Unique and identifiable neurons are relatively common in small animals but become less and less common as brain size increases.
- Identifiable neurons are present in small animals.
- They can be distinguished from other neurons inside the individual or across individuals (Bullock, 2000).
Models of Neurons
A model of a neuron is an abstract mathematical model that imitates some aspect of the behavior of neurons. Such models are used to predict the outcome of biological processes and to study the nervous system in a more flexible environment of study (a computer).
|Costs of Neuron Models|
|Model||# of biological features||FLOPS/ms|
|Integrate‐and‐fire with adapt.||5||10|
Note: Only the Morris‐Lecar and Hodgkin‐Huxley models are "biophysically meaningful" in the sense that they attempt actually to model real biophysics, the others only aim for a correct phenomenology of spiking.
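As a concrete illustration of the cheapest entry in the table, here is a minimal leaky integrate-and-fire neuron: one state variable per neuron, Euler-integrated. The parameters and units are illustrative, not fitted to any biological cell.

```python
# Minimal leaky integrate-and-fire neuron (one state variable: V).
def simulate_lif(input_current, dt=1.0, tau=10.0, v_rest=0.0,
                 v_reset=0.0, v_threshold=15.0, r_m=10.0):
    """Euler-integrate dV/dt = (v_rest - V + R*I) / tau and return
    spike times. Units are arbitrary (think ms, mV, nA)."""
    v, spikes = v_rest, []
    for step, i_ext in enumerate(input_current):
        v += dt * (v_rest - v + r_m * i_ext) / tau
        if v >= v_threshold:
            spikes.append(step * dt)
            v = v_reset        # instantaneous reset after a spike
    return spikes

# Constant input of 2.0 drives V toward R*I = 20, above threshold,
# so the neuron fires regularly:
spikes = simulate_lif([2.0] * 100)
```

Note how the model only aims for "a correct phenomenology of spiking", as the note above puts it: the membrane charges, crosses threshold, and resets, with no ion channels or biophysics anywhere in sight.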
The primary software used by the BBP for neural simulations is a package called NEURON. This was developed starting in the 1990s by Michael Hines at Yale University and John Moore at Duke University. It is written in C, C++, and FORTRAN. The software continues to be under active development and, as of July 2012, is at version 7.2. It is free and open source software; both the code and the binaries are freely available on the website. Michael Hines and the BBP team collaborated in 2005 to port the package to the massively parallel Blue Gene supercomputer.
The NEURON Simulation Environment (aka NEURON --see http://www.neuron.yale.edu/) is designed for modeling individual neurons and networks of neurons, and is widely used by experimental and theoretical neuroscientists. It provides tools for conveniently building, managing, and using models that are numerically sound and computationally efficient. NEURON is particularly well-suited to problems that are closely linked to experimental data, especially those that involve cells with complex anatomical and biophysical properties. NEURON began in the laboratory of John W. Moore at Duke University, where he and Michael Hines started their collaboration to develop simulation software for neuroscience research. It has benefited from judicious revision and selective enhancement, guided by feedback from the growing number of neuroscientists who have used it to incorporate empirically-based modeling into their research strategies. Most papers that report work done with NEURON have addressed the operation and functional consequences of mechanistic models of biological neurons and networks. Readers who wish to see specific examples are encouraged to peruse the online bibliography. Working code for many published NEURON models can be downloaded from ModelDB.
- While traditionally the vertebrate spinal cord is often regarded as little more than a bundle of motor and sensor axons together with a central column of stereotypical reflex circuits and pattern generators, there is evidence that the processing may be more complex (Berg, Alaburda et al., 2007) and that learning processes occur among spinal neurons (Crown, Ferguson et al., 2002). The networks responsible for standing and stepping are extremely flexible and unlikely to be hardwired (Cai, Courtine et al., 2006).
- This means that emulating just the brain part of the central nervous system will lose much body control that has been learned and resides in the non‐scanned cord. On the other hand, it is possible that a generic spinal cord network would, when attached to the emulated brain, adapt (requiring only scanning and emulating one spinal cord, as well as finding a way of attaching the spinal emulation to the brain emulation). But even if this is true, the time taken may correspond to rehabilitation timescales of (subjective) months, during which time the simulated body would be essentially paralysed. This might not be a major problem for personal identity in mind emulations (since people suffering spinal injuries do not lose personal identity), but it would be a major limitation to their usefulness and might limit development of animal models for brain emulation.
- A similar concern could exist for other peripheral systems such as the retina and autonomic nervous system ganglia.
- The human spinal cord weighs 2.5% of the brain and contains around 10⁻⁴ of the number of neurons in the brain (13.5 million neurons). Hence adding the spinal cord to an emulation would add a negligible extra scan and simulation load.
- Synapses are usually characterized by their “strength”, the size of the postsynaptic potential they produce in response to a given magnitude of incoming excitation. Many (most?) synapses in the CNS also exhibit depression and/or facilitation: a temporary change in release probability caused by repeated activity (Thomson, 2000). These rapid dynamics likely play a role in a variety of brain functions, such as temporal filtering (Fortune and Rose, 2001), auditory processing (Macleod, Horiuchi et al., 2007) and motor control (Nadim and Manor, 2000). These changes occur on timescales longer than neural activity (tens of milliseconds) but shorter than long‐term synaptic plasticity (minutes to hours). Adaptation has already been included in numerous computational models. The computational load is usually 1‐3 extra state variables in each synapse.
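The "1-3 extra state variables" can be made concrete with a Tsodyks-Markram-style depressing synapse: a single resource variable x that each spike partially consumes and that recovers exponentially between spikes. The parameters here (U, tau_rec) are illustrative, not fitted to data.

```python
# One-state-variable model of short-term synaptic depression.
import math

def depressing_synapse(spike_times, U=0.5, tau_rec=800.0):
    """Return the relative PSP amplitude at each presynaptic spike.
    The resource x recovers toward 1 between spikes with time
    constant tau_rec; each spike releases a fraction U of it."""
    x, last_t, amps = 1.0, None, []
    for t in spike_times:
        if last_t is not None:
            x = 1.0 - (1.0 - x) * math.exp(-(t - last_t) / tau_rec)
        amps.append(U * x)   # amplitude proportional to available resource
        x -= U * x           # the spike consumes part of the resource
        last_t = t
    return amps

# A 50 ms burst depresses the synapse; a long pause lets it recover:
amps = depressing_synapse([0, 50, 100, 150, 2000])
```

During the burst each successive amplitude shrinks (depression); after the ~2-second pause the last response is nearly back to the initial strength, matching the "tens of milliseconds to minutes" intermediate timescale described above.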
- Not all neuromodulators are known. At present about 10 major neurotransmitters and 200+ neuromodulators are known, and the number is increasing. (Thomas, 2006) lists 272 endogenous extracellular neuroactive signal transducers with known receptors, 2 gases, 19 substances with putative or unknown binding sites and 48 endogenous substances that may or may not be neuroactive transducers (many of these may be more involved in general biochemical signalling than brain‐specific signals). Plotting the year of discovery for different substances (or families of substances) suggests a linear or possibly sigmoidal growth over time (Figure 11).
- An upper bound on the number of neuromodulators can be found using genomics. About 800 G‐protein coupled receptors can be found in the human genome, of which about half were sensory receptors. Many are “orphans” that lack known ligands, and methods of “deorphanizing” receptors by expressing them and determining what they bind to have been developed. In the middle 1990’s about 150 receptors had been paired to 75 transmitters, leaving around 150‐200 orphans in 2003 (Wise, Jupe et al., 2004). At present, 7‐8 receptors are deorphanized each year (von Bohlen und Halbach and Dermietzel, 2006); at this rate all orphans should be adopted within ≈20 years, leading to the discovery of around 50 more transmitters (Civelli, 2005).
- Similarly guanylyl cyclase‐coupled receptors (four orphans, (Wedel and Garbers, 1998)), tyrosine kinase‐coupled receptors (<<100, (Muller‐Tidow, Schwable et al., 2004)) and cytokine receptors would add a few extra transmitters.
- However, there is room for some surprises. Recently it was found that protons were used to signal in C. elegans rhythmic defecation (Pfeiffer, Johnson et al., 2008) mediated using a Na+/H+ exchanger, and it is not inconceivable that similar mechanisms could exist in the brain. Hence the upper bound on all transmitters may be set not just by receptors but also by membrane transporter proteins.
- For WBE, modelling all modulatory interactions is probably crucial, since we know that neuromodulation has important effects on mood, consciousness, learning and perception. This means not just detecting their existence but creating quantitative models of these interactions, a sizeable challenge for experimental and computational neuroscience.
Unknown Ion Channels
- Similar to receptors, there are likely unknown ion channels that affect neuron dynamics. The Ligand Gated Ion Channel Database currently contains 554 entries with 71 designated as channel subunits from Homo sapiens (EMBL‐EBI, 2008; Donizelli, Djite et al., 2006). Voltage gated ion channels form a superfamily with at least 143 genes (Yu, Yarov‐Yarovoy et al., 2005). This diversity is increased by multimerization (combinations of different subunits), modifier subunits that do not form channels on their own but affect the function of channels they are incorporated into, accessory proteins as well as alternate mRNA splicing and post‐translational modification (Gutman, Chandy et al., 2005). This would enable at least an order of magnitude more variants.
- Ion channel diversity increases the diversity of possible neuron electrophysiology, but not necessarily in a linear manner. See the discussion of inferring electrophysiology from gene transcripts in the interpretation chapter.
Surrounding the cells of the brain is the extracellular space, on average 200 Å across and corresponding to 20% of brain volume (Nicholson, 2001). It transports nutrients and buffers ions, but may also enable volume transmission of signalling molecules.
- Volume transmission of small molecules appears fairly well established. Nitric oxide is hydrophobic and has low molecular weight and can hence diffuse relatively freely through membranes: it can reach up to 0.1‐0.2 mm away from a release point under physiological conditions (Malinski, Taha et al., 1993; Schuman and Madison, 1994; Wood and Garthwaite, 1994). While mainly believed to be important for autoregulation of blood supply, it may also have a role in memory (Ledo, Frade et al., 2004). This might explain how LTP (Long Term Potentiation) can induce “crosstalk” that reduces LTP induction thresholds over a span of 10 μm and ten minutes (Harvey and Svoboda, 2007).
- Signal substances such as dopamine exhibit volume transmission (Rice, 2000) and this may have effects on potentiation of nearby synapses during learning: simulations show that a single synaptic release can be detected up to 20 μm away and with a 100 ms half‐life (Cragg, Nicholson et al., 2001). Larger molecules have their relative diffusion speed reduced by the limited geometry of the extracellular space, both in terms of its tortuosity and its anisotropy (Nicholson, 2001). As suggested by Robert Freitas, there may also exist active extracellular transport modes. Diffusion rates are also affected by local flow of the CSF and can differ from region to region (Fenstermacher and Kaye, 1988); if this is relevant then local diffusion and flow measurements may be needed to develop at least a general brain diffusion model. The geometric part of such data could be relatively easily gained from the high resolution 3D scans needed for other WBE subproblems.
- Rapid and broad volume transmission such as from nitric oxide can be simulated using a relatively coarse spatiotemporal grid size, while local transmission requires a grid with a spatial scale close to the neural scale if diffusion is severely hindered.
- For constraining brain emulation it might be useful to analyse the expected diffusion and detection distances of the ≈200 known chemical signalling molecules based on their molecular weight, diffusion constant and uptake (for different local neural geometries and source/sink distributions). This would provide information on diffusion times that constrain the diffusion part of the emulation and possibly show which chemical species need to be spatially modelled.
- Recent results show that neurogenesis persists in some brain regions in adulthood, and might have nontrivial functional consequences (Saxe, Malleret et al., 2007). During neurite outgrowth, and possibly afterwards, cell adhesion proteins can affect gene expression and possibly neuron function by affecting second messenger systems and calcium levels (Crossin and Krushel, 2000). However, neurogenesis is mainly confined to discrete regions of the brain and does not occur to a great extent in adult neocortex (Bhardwaj, Curtis et al., 2006).
- Since neurogenesis occurs on fairly slow timescales (> 1 week) compared to brain activity and normal plasticity, it could probably be ignored in brain emulation if the goal is an emulation that is intended to function faithfully for only a few days and not to exhibit truly long‐term memory consolidation or adaptation.
- A related issue is remodelling of dendrites and synapses. Over the span of months dendrites can grow, retract and add new branch tips in a cell type‐specific manner (Lee, Huang et al., 2006). Similarly synaptic spines in the adult brain can change within hours to days, although the majority remain stable over multi‐month timespans (Grutzendler, Kasthuri et al., 2002; Holtmaat, Trachtenberg et al., 2005; Zuo, Lin et al., 2005). Even if neurogenesis is ignored and the emulation is of an adult brain, it is likely that such remodelling is important to learning and adaptation.
- Simulating stem cell proliferation would require data structures representing different cells and their differentiation status, data on what triggers neurogenesis, and models allowing for the gradual integration of the cells into the network. Such a simulation would involve modelling the geometry and mechanics of cells, possibly even tissue differentiation. Dendritic and synaptic remodelling would also require a geometry and mechanics model. While technically involved and requiring at least a geometry model for each dendritic compartment, the computational demands appear small compared to neural activity.
- Glia cells have traditionally been regarded as merely supporting actors to the neurons, but recent results suggest that they may play a fairly active role in neural activity. Besides the important role of myelinization in increasing neural transmission speed, at the very least they have strong effects on the local chemical environment of the extracellular space surrounding neurons and synapses.
- Glial cells exhibit calcium waves that spread along glial networks and affect nearby neurons (Newman and Zahs, 1998). They can both excite and inhibit nearby neurons through neurotransmitters (Kozlov, Angulo et al., 2006). Conversely, the calcium concentration of glial cells is affected by the presence of specific neuromodulators (Perea and Araque, 2005). This suggests that the glial cells act as an information processing network integrated with the neurons (Fellin and Carmignoto, 2004). One role could be in regulating local energy and oxygen supply.
- If glial processing turns out to be significant and fine‐grained, brain emulation would have to emulate the glia cells in the same way as neurons, increasing the storage demands by at least one order of magnitude. However, the time constants for glial calcium dynamics are generally far slower than the dynamics of action potentials (on the order of seconds or more), suggesting that the time resolution would not have to be as fine, making the computational demands increase far less steeply.
Loss of Instantaneous State
Every realistic method to scan brain tissue at a decent resolution deals only with structure, not continuous activity. Whatever information is stored as a pattern of brain activity instead of structure (Such as working memory) will be destroyed by the process. Information stored in the instantaneous state of neurons (Ion concentrations across membranes, synaptic vesicle depletion, and neurotransmitters in motion) would be lost. The most likely consequence would be memory loss up to some amount of time prior to the scanning.
Cases where people have woken up from long periods of electrocerebral silence prove that instantaneous brain activity is not required for the long term maintenance of personal identity.
|Feature||Likelihood of necessity for WBE||Implementation Problems|
|Spinal cord||Likely||Minor. Would require scanning some extra tissue.|
|Synaptic adaptation||Very likely||Minor. Introduces extra state-variables and parameters that need to be set.|
|Currently unknown neurotransmitters and neuromodulators||Very likely||Minor. Similar to known transmitters and modulators.|
|Currently unknown ion channels||Very likely||Minor. Similar to known ion channels.|
|Volume transmission||Somewhat likely||Medium. Requires diffusion models and microscale geometry.|
|Body chemical environment||Somewhat likely||Medium. Requires metabolomic models and data.|
|Neurogenesis and remodelling||Somewhat likely||Medium. Requires cell mechanics and growth models.|
|Glia cells||Possible||Minor. Would require more simulation compartments, but likely running on a slower timescale.|
|Ephaptic effects||Possible||Minor. Would require more simulation compartments, but likely running on a slower timescale.|
|Dynamical state||Very unlikely||Profound. Would preclude most proposed scanning methods.|
|Quantum computation||Very unlikely||Profound. Would preclude currently conceived scanning methods and would require quantum computing.|
|Analog computation||Very unlikely||Profound. Would require analog computer hardware.|
|True randomness||Very unlikely||Medium to profound, depending on whether merely 'true' random noise or 'hidden variables' are needed.|
Sensory/Motor Neuron Map
The link between the body and the brain is in the spine, the most distinguishable part of the Peripheral Nervous System. In a spinal nerve, the nerve fibers divide into two branches, or roots: the dorsal root carries sensory axons, while the ventral root carries motor axons. One is a receiver, and the other is a sender. Charles Bell, in 1810, proved this by performing tests on animals, cutting axon after axon and observing the resulting paralysis or loss of sensation.
This arrangement is species-generic ?REF, and it may only be necessary to trace a single person's axons to map the sensory/motor axons all the way up the spine into the brain, where the arrangement becomes more specialized for the individual. In any case: the sensory and motor neurons are just another number in the global table of neurons, but once they have been matched to their corresponding axons in the PNS (that is, once it is known exactly where they send signals to or receive signals from), one forms a table -- a map -- with a set of pointers to those neurons, categorized, for example, by region. For example, one particular neuron may be classified as "Motor: second phalanx muscle, index finger, left hand, left arm".
- Virtual emulation system.png: The system for a completely virtual emulation.
- Embodied emulation system.jpg: The system for an embodied emulation.
This map links the sensory/motor neurons to their corresponding (Virtual) muscles and sensors. It's a simple way of abstracting away motor control, by providing simple, orderly access to the neurons responsible for it.
The Body Model
The virtual body can also be considered as a means of abstraction: instead of mapping the sensory/motor neuron map directly to a telepresence robot or to a virtual body, one could just pipe information to and from the individual virtual muscles and virtual sensors. For example, consider a virtual avatar that has points where collision checking is done. When a collision is detected, a pulse (possibly containing the information of the force vector) is piped to the corresponding sensor, which sends it to its corresponding neuron. The same is done with muscles, but vice versa: the nerve impulse is translated by the map from a pure impulse to, say, stress and force vectors in the virtual muscle. The muscle can then pipe these values to a software object that turns them into motion of a limb in the avatar.
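The map-plus-piping idea can be sketched in a few lines. Everything here is hypothetical scaffolding for illustration: the class name, the label scheme, and the neuron id are all invented, and a real system would inject current into the emulation rather than append to a log.

```python
# Hypothetical sketch of the sensory/motor map: a table from labelled
# body locations to neuron ids, plus a pipe turning collision events
# into pulses at the mapped sensory neuron.
from dataclasses import dataclass, field

@dataclass
class SensoryMap:
    table: dict = field(default_factory=dict)   # label -> neuron id
    log: list = field(default_factory=list)     # stand-in for the emulation

    def register(self, label, neuron_id):
        self.table[label] = neuron_id

    def on_collision(self, label, force):
        """Translate a physics-engine collision into a neural pulse."""
        neuron = self.table[label]
        # A real system would inject current into the emulated neuron;
        # here we just record the (neuron, amplitude) pulse.
        self.log.append((neuron, force))

body = SensoryMap()
body.register("touch: index fingertip, left hand", 1_204_775)
body.on_collision("touch: index fingertip, left hand", force=0.8)
```

The point of the indirection is the one made above: the avatar and the emulation never need to know about each other's internals, only about the map's labels.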
The body simulation translates between neural signals and the environment, as well as maintains a model of body state as it affects the brain emulation.
How detailed the body simulation needs to be in order to function depends on the goal. An “adequate” simulation produces enough and the right kind of information for the emulation to function and act, while a convincing simulation is nearly or wholly indistinguishable from the “feel” of the original body.
A number of relatively simple biomechanical simulations of bodies connected to simulated nervous systems have been created to study locomotion. (Suzuki, Goto et al., 2005) simulated the C. elegans body as a multi‐joint rigid link where the joints were controlled by motor neurons in a simulated motor control network. Örjan Ekeberg has simulated locomotion in the lamprey (Ekeberg and Grillner, 1999), stick insects (Ekeberg, Blümel et al., 2004), and the hind legs of the cat (Ekeberg and Pearson, 2005), where a rigid skeleton is moved by muscles either modeled as springs contracting linearly with neural signals or, in the case of the cat, by a model fitting observed data relating neural stimulation, length, and velocity with contraction force (Brown, Scott et al., 1996). These models also include sensory feedback from stretch receptors, enabling movements to adapt to environmental forces: locomotion involves an information loop between neural activity, motor response, body dynamics, and sensory feedback (Pearson, Ekeberg et al., 2006).
Today biomechanical model software enables fairly detailed models of muscles, the skeleton, and the joints, enabling calculation of forces, torques, and interaction with a simulated environment (Biomechanics Research Group Inc, 2005). Such models tend to simplify muscles as lines and make use of pre‐recorded movements or tensions to generate the kinematics.
A detailed mechanical model of human walking has been constructed with 23 degrees of freedom driven by 54 muscles. However, it was not controlled by a neural network but rather used to find an energy‐optimizing gait (Anderson and Pandy, 2001). A state of‐the‐art model involving 200 rigid bones with over 300 degrees of freedom, driven by muscular actuators with excitation‐contraction dynamics and some neural control, has been developed for modelling human body motion in a dynamic environment, e.g. for ergonomics testing (Ivancevic and Beagley, 2004). This model runs on a normal workstation, suggesting that rigid body simulation is not a computationally hard problem in comparison to WBE.
Other biomechanical models are being explored for assessing musculoskeletal function in human (Fernandez and Pandy, 2006), and can be validated or individualized by use of MRI data (Arnold, Salinas et al., 2000) or EMG (Lloyd and Besier, 2003). It is expected that near future models will be based on volumetric muscle and bone models found using MRI scanning (Blemker, Asakawa et al., 2007; Blemker and Delp, 2005), as well as construction of topological models (Magnenat‐Thalmann and Cordier, 2000). There are also various simulations of soft tissue (Benham, Wright et al., 2001), breathing (Zordan, Celly et al., 2004) and soft tissue deformation for surgery simulation (Cotin, Delingette et al., 1999).
Another source of body models comes from computer graphics, where much effort has gone into rendering realistic characters, including modelling muscles, hair and skin. The emphasis has been on realistic appearance rather than realistic physics (Scheepers, Parent et al., 1997), but increasingly the models are becoming biophysically realistic and overlapping with biophysics (Chen and Zeltzer, 1992; Yucesoy, Koopman et al., 2002). For example, 30 contact/collision coupled muscles in the upper limb with fascia and tendons were generated from the visible human dataset and then simulated using a finite volume method; this simulation (using one million mesh tetrahedra) ran at a rate of 240 seconds per frame on a single CPU Xeon 3.06 GHz (on the order of a few GFLOPS) (Teran, Sifakis et al., 2005). Scaling this up 20 times to encompass ≈600 muscles implies a computational cost on the order of a hundred TFLOPS for a complete body simulation.
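As a sanity check, the arithmetic behind that final estimate can be reproduced in a few lines. The CPU rating and the real-time frame rate below are assumed round numbers, not figures from the cited paper:

```python
# Back-of-envelope reproduction of the body-simulation cost estimate above.
# Assumptions: ~3 GFLOPS for the Xeon 3.06 GHz ("a few GFLOPS"), 24 frames/s
# as a real-time target.

cpu_flops      = 3e9      # assumed sustained rate of the single Xeon
secs_per_frame = 240.0    # reported cost per frame for 30 muscles
scale_up       = 20       # 30 muscles -> ~600 muscles for a whole body

flop_per_frame = cpu_flops * secs_per_frame * scale_up  # whole-body frame cost
realtime_flops = flop_per_frame * 24                    # frames per second

print(f"{realtime_flops:.1e} FLOPS")  # ~3.5e+14, i.e. on the order of 100 TFLOPS
```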
Physiological models are increasingly used in medicine for education, research, and patient evaluation. Relatively simple models can accurately simulate blood oxygenation (Hardman, Bedforth et al., 1998). For a body simulation this might be enough to provide the right feedback between exertion and brain state. Similarly simple nutrient and hormone models could be used insofar as a realistic response to hunger and eating is desired.
See: Sensory Augmentation
Vision
Visual photorealism has been sought in computer graphics for about 30 years, and this appears to be a fairly mature area, at least for static images and scenes. Much effort is currently going into such technology for use in computer games and movies.
(McGuigan, 2006) proposes a “graphics Turing test” and estimates that for 30 Hz interactive visual updates 518.4‐1036.8 TFLOPS would be enough for Monte Carlo global illumination. This might actually be an overestimate since he assumes generation of complete pictures. Generating only the signal needed for the retinal receptors (with higher resolution for the fovea than the periphery) could presumably reduce the demands. Similarly, more efficient implementations of the illumination model (or a cheaper one) would also reduce demands significantly.
Hearing
The full acoustic field can be simulated over the frequency range of human hearing by solving the differential equations for air vibration (Garriga, Spa et al., 2005). While accurate, this method has a computational cost that scales with the volume simulated, up to 16 TFLOPS for a 2×2×2 m room. This can likely be reduced by the use of adaptive mesh methods, or ray‐ or beam‐tracing of sound (Funkhouser, Tsingos et al., 2004).
Sound generation occurs not only from sound sources such as instruments, loudspeakers, and people but also from normal interactions between objects in the environment. By simulating surface vibrations, realistic sounds can be generated as objects collide and vibrate. A basic model with N surface nodes requires 0.5292 N GFLOPS, but this can be significantly reduced by taking perceptual shortcuts (Raghuvanshi and Lin, 2006; Raghuvanshi and Lin, 2007). This form of vibration generation can likely be used to synthesize realistic vibrations for touch.
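The two acoustic cost figures above can be put on a common scale. The linear-in-volume scaling for the room simulation is stated in the text; the example room sizes are illustrative:

```python
# Rough scale of the acoustic costs quoted above.
# room_tflops: cost scales with simulated volume; 16 TFLOPS for a 2x2x2 m room.
# vibration_gflops: 0.5292*N GFLOPS for a surface-vibration model with N nodes.

def room_tflops(x_m, y_m, z_m, ref_tflops=16.0, ref_volume_m3=8.0):
    """Full acoustic field cost, assuming cost scales linearly with volume."""
    return ref_tflops * (x_m * y_m * z_m) / ref_volume_m3

def vibration_gflops(n_nodes):
    """Surface-vibration sound synthesis cost for N surface nodes."""
    return 0.5292 * n_nodes

print(room_tflops(5, 4, 3))      # a 60 m^3 living room -> 120 TFLOPS
print(vibration_gflops(10_000))  # a 10,000-node object -> ~5.3 TFLOPS equivalent
```

Both figures are before the adaptive-mesh and perceptual shortcuts the cited papers describe, which reduce the cost substantially.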
Smell and Taste
So far no work has been done on simulated smell and taste in virtual reality, mainly due to the lack of output devices. Some simulations of odorant diffusion have been done in underwater environments (Baird, Johari et al., 1996) and in the human and rat nasal cavity (Keyhani, Scherer et al., 1997; Zhao, Dalton et al., 2006). In general, an odor simulation would involve modelling diffusion and transport of chemicals through air flow; and the relatively low temporal and spatial resolution of human olfaction would likely allow a fairly simple model. A far more involved issue is what odorant molecules to simulate. Humans have 350 active olfactory receptor genes, but we can likely detect more variation due to different diffusion in the nasal cavity (Shepherd, 2004).
Taste relies on only a few types of receptor, but the tongue also detects texture, and the placement of the nose forces one to smell objects entering the mouth. The former may require complex simulations of the physics of virtual objects in the case of virtual environments, and pressure/temperature sensors in the case of simulacra.
Haptics
The haptic senses of touch, proprioception, and balance are crucial for performing skilled actions in real and virtual environments (Robles‐De‐La‐Torre, 2006).
Tactile sensation relates both to the forces affecting the skin (and hair) and to how they change as objects or the body move. To simulate touch stimuli, collision detection is needed to calculate forces on the skin (and possibly deformations), as well as the vibrations produced when it moves over a surface or explores it with a hard object (Klatzky, Lederman et al., 2003). To achieve realistic haptic rendering, updates in the kilohertz range may be necessary (Lin and Otaduy, 2005). In environments with deformable objects, various nonlinearities in response and restitution have to be taken into account (Mahvash and Hayward, 2004). Proprioception, the sense of how far muscles and tendons are stretched (and, by inference, limb location), is important for maintaining posture and orientation. Unlike the other senses, proprioceptive signals would be generated by the body model internally. Simulated Golgi organs, muscle spindles, and pulmonary stretch receptors would then convert body states into nerve impulses.
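The simple end of this pipeline can be sketched with the common penalty (spring) model of contact force, updated at the 1 kHz rate mentioned above. The stiffness value and probe motion are illustrative assumptions:

```python
# Minimal haptic-rendering step: a spring-like penalty force proportional to
# penetration depth, updated at 1 kHz. Stiffness and velocity are illustrative.

def contact_force(depth_m, stiffness_n_per_m=1000.0):
    """Restoring force once the probe penetrates the surface (penalty model)."""
    return stiffness_n_per_m * depth_m if depth_m > 0 else 0.0

dt = 1.0 / 1000.0   # 1 kHz update, the low end of the quoted range
depth = 0.0
velocity = 0.002    # probe pushing into the surface at 2 mm/s
samples = []
for _ in range(5):
    depth += velocity * dt
    samples.append(contact_force(depth))
# Force ramps up linearly as the probe sinks in: 0.002, 0.004, ... 0.010 N
```

Real haptic renderers add the nonlinearities and vibration models the cited papers describe; this only shows why the update loop itself is cheap.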
The balance signals from the inner ear appear relatively simple to simulate, since they depend only on the fluid velocity and pressure in the semicircular canals (which can likely be assumed to be laminar and homogeneous) and gravity effects on the utricle and saccule. Compared to the other senses, the computational demands are minuscule.
Thermoreception could presumably be simulated by giving each object in the virtual environment a temperature, activating thermoreceptors in contact with the object. Nociception (pain) would be simulated by activating the receptors in the presence of excessive forces or temperatures; the ability to experience pain from simulated inflammatory responses may be unnecessary verisimilitude.
The Environment Model
Map of Technological Capabilities
Driving forces for the development of the technology:
- Moore's Law
- WBE Specific?
|Scanning||Preprocessing/fixation||Preparing brains appropriately, retaining relevant microstructure and state.||See Cryonics, Plastination, but overall fairly good.|
|Physical handling||Methods of manipulating fixed brains and tissue pieces before, during, and after scanning.||ATLUM automates slicing, and there's been some work in automating the feeding of tape into a machine that sprays contrast metals on it and then passes it under an electron microscope.|
|Imaging||Volume||Capability to scan entire brain volumes in reasonable time and expense.||Massively parallel ATLUM and massively parallel scanning electron microscopes. The latter are being developed by the semiconductor industry.|
|Resolution||Scanning at enough resolution to enable reconstruction.||Electron microscopy has provided sub-nanometer resolution since the mid-20th century. Electron microscopes at that resolution may cost well above half a million dollars, though.|
|Functional information||Scanning is able to detect the functionally relevant properties of tissue.||The software to translate electron micrographs to abstract models is currently in very early stages, capable of tracing the shape of cells, but not much more.|
|Translation||Image processing||Geometric adjustment||Handling distortions due to scanning imperfections.||Unlikely to be a serious obstacle: some basic image registration to detect overlap and match adjacent electron micrographs.|
|Data interpolation||Handling missing data (Using surrounding data to interpolate what should be placed in missing spots).||Unknown.|
|Noise removal||Improving scan quality.||Unlikely to be a serious obstacle.|
|Tracing||Detecting structure and processing it into a consistent 3D model of the tissue.||Doable right now. Shape tracing is possibly the simplest, cheapest part of the whole process.|
|Scan interpretation||Cell type identification||Identifying cell types.||The software to translate electron micrographs to abstract models is currently in very early stages, capable of tracing the shape of cells, but not much more.|
|Synapse identification||Identifying synapses and their connectivity.||The software to translate electron micrographs to abstract models is currently in very early stages, capable of tracing the shape of cells, but not much more.|
|Parameter estimation||Estimating functionally relevant parameters of cells, synapses, and other entities.||Unknown.|
|Databasing||Storing the resulting inventory in an efficient way.||This is essentially a hardware problem. The scan of a mere nematode produces whole terabytes of electron micrographs. These have to be stored for interpretation (unless one interprets them during the scan), but the abstract model may be much lighter and easier to store.|
|Software model of neural system||Mathematical model||Model of entities and their behaviour (The simulator itself).||Pick one.|
|Efficient implementation||A final, fast implementation of the model (For example, neuromorphic hardware or dedicated chips where the algorithms are implemented directly in the hardware).||The Izhikevich model, with considerations. (Actually, Izhikevich already analyzed the implementation of the model in MEMS).|
|Emulation||Storage||Storage of original model and current state (And whatever snapshots may be made).||Again, a hardware problem.|
|Bandwidth||Efficient inter‐processor communication (Long-range data buses to implement long-range axons).||Unsure.|
|CPU||Processor power to run the simulation. Moore's Law can't be expected to continue beyond the first half of this century; since processors can't keep improving exponentially forever, alternative computing will have to be used. Instead of running the simulation as a program on a universal computer (an ordinary computer), it should be done with neuromorphic hardware, where the model is implemented directly as hardware: one chip, one neuron (or one compartment, or one minicolumn, as the case may be), mounted on some kind of routing system.||Not even close... Although, technically, an entire brain of Izhikevich neurons has already been emulated: the original run used a Beowulf cluster with 27 processors, so a bigger, badder computer could probably bring the slowdown factor a little closer to the acceptable zone.|
|Body model||Simulation of a body enabling interaction with a virtual environment or through a robot.||Unlikely to be a serious obstacle.|
|Environment model||Virtual environment for the virtual body.||Unlikely to be a serious obstacle, if it's visual only. A collision checker may provide some basic pressure input to sensory neurons.|
|Exoself||A software object that holds the simulation, maps sensory/motor neurons to the body model, ties the model to a virtual body in the virtual environment or to a telepresence robot, and handles communication with the network and the operating system.||Once the above are ready, write a wrapper that puts them all together, and you have an Exoself.|
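The Exoself row above is essentially a wiring diagram. A hypothetical sketch of that wrapper, with every component interface an invented placeholder rather than a real API:

```python
# Hypothetical sketch of the "Exoself" wrapper: one object tying together the
# pieces listed in the table. All interfaces here are invented placeholders.

class Exoself:
    def __init__(self, emulation, body_model, environment, neuron_map):
        self.emulation = emulation   # the neural simulation itself
        self.body = body_model       # virtual muscles and sensors
        self.env = environment       # virtual world or telepresence link
        self.map = neuron_map        # sensory/motor neuron <-> body wiring

    def step(self, dt):
        stimuli = self.env.sample(dt)                        # world -> stimuli
        self.emulation.inject(self.map.encode(stimuli))      # -> sensory neurons
        spikes = self.emulation.advance(dt)                  # run one timestep
        forces = self.body.actuate(self.map.decode(spikes))  # spikes -> muscles
        self.env.apply(forces)                               # muscles -> world

class _Stub:
    """Minimal stand-in components so the wiring can be exercised end to end."""
    def __init__(self):
        self.log = []
    def sample(self, dt):
        return ["touch"]
    def encode(self, stimuli):
        return stimuli
    def inject(self, inputs):
        self.log.append(("inject", inputs))
    def advance(self, dt):
        return ["spike"]
    def decode(self, spikes):
        return spikes
    def actuate(self, commands):
        return commands
    def apply(self, forces):
        self.log.append(("apply", forces))

env, emu = _Stub(), _Stub()
exoself = Exoself(emu, _Stub(), env, _Stub())
exoself.step(dt=0.001)
```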
- Superior ATLUM.
- Massively parallel electron microscopes.
- Fractal wires with RF nodes at the tips that could be injected through the Circle of Willis into the blood vessels of the brain, hopefully without halting blood flow. Quite speculative.
The notion of mind uploading assumes that the mind arises from activity in the brain. This notion is known as "materialism," as opposed to "dualism," which supposes that the mind (or "soul") exists separately from the brain, but connected to it in some fashion. Descartes, for example, believed that the mind communicated with the body through the pineal gland.
The evidence for materialism is overwhelming. Nearly any aspect of the mind -- temperament, memories, appetite, and so on -- can be disrupted by damage to specific areas of the brain. Modern brain imaging techniques can even detect brain activity correlated with thought.
Nonetheless, the question of exactly how consciousness arises from unconscious matter is a nontrivial problem. Some scientists have suggested that consciousness is a fundamental aspect of the universe, and directly related to information processing; others believe it requires a very particular arrangement of neural circuits, resulting as a by-product of our evolutionary history. Clark Dorman has prepared an excellent review of some books on consciousness and the brain.
The central 'contradiction' of WBE is simultaneously believing that:
- Consciousness and cognitive processes have an entirely material basis, and that there are no spiritual or non-physical factors to the mind.
- This biological foundation can be removed, with the behavior conserved in some other computational substrate.
Continuity of Consciousness
The bottom line is that, when the rubber hits the road, these virtual models are pretty close to being phenomenologically equivalent to real nervous systems.
Once the brain is in a computer, you can do anything to it. Any analysis and modification becomes possible, in a reversible manner. No surgery, no need for expensive biocompatible implants and clumsy electrodes. Every bit in the brain can be accessed with no negative effects on the rest of the 'tissue'. You can run all sorts of analysis algorithms, find common patterns among different brains, and identify the functional clusters that partake in different cognitive functions with the ultimate resolution, unlike MRI or EEG machines, which blur the finer details into overall patterns. You can also make any change that you want: delete whole regions, copy and paste them from one brain to another, or grow a few dozen millimeters of cortex without care for skull size.
Human Connectome Project
Blue Brain Project
Main article: Brain Corporation
Brain Corporation is a research company developing neuromorphic machines (Software and hardware), with a focus on neuromorphic computer vision, motor control and autonomous robotics.
Large-Scale Emulations So Far
Simulations marked blue in the table are at the human-brain level.
|Year||Authors||Type||# of Neurons||# of Synapses||Hardware and Software||Slowdown factor||Timestep||Notes|
|2007||Hans E. Plesser, Jochen M. Eppler, Abigail Morrison, Markus Diesmann, and Marc-Oliver Gewaltig||Integrate-and-fire||12,500||1.56x10^7||Sun X4100 cluster, 2.4 GHz AMD Opteron, 8 GB RAM, MPI, NEST||2||0.1 ms||Supralinear scaling up to 80 virtual processes for 10^5 neurons/10^9 synapses.|
|2006||Mikael Djurfeldt, Mikael Lundqvist, Christopher Johansson, Örjan Ekeberg, Martin Rehn, Anders Lansner||6-compartment neurons||2.2x10^7||1.1x10^10||IBM Watson Research Blue Gene, SPLIT||5,942||Unknown||Computing requirements: 11.5 TFLOPS. Spatial delays corresponding to 16 cm^2 of cortex surface.|
|2005||Izhikevich||Izhikevich neurons, random synaptic connectivity generated on the fly (on the fly as in at every step: it was not generated at the beginning and then stored, because the matrix would've taken 10,000 terabytes)||10^11||10^15||Beowulf cluster, 27 3 GHz processors||4.2x10^6 (50 days of real time to simulate one second of activity)||1 ms||Observed alpha and gamma rhythms. Holy Toledo!|
|2000||Howell, Dyhrfjeld-Johnsen et al.||Compartment (Purkinje 4,500 comp./granule 1 comp.)||16 Purkinje cells, 244,000 granule cells||Unknown||128-processor Cray T3E, PGENESIS||4,500||Unknown||Computing requirements: 76.8 GFLOPS|
|2000||Howell, Dyhrfjeld-Johnsen et al.||Unknown||60,000 granule cells, 300 mossy fibers, 300 Golgi cells, 300 stellate cells, 1 Purkinje cell||Unknown||Single workstation, 1 GB memory, GENESIS||79,200||20 μs||None|
|2007||Kozlov, Lansner et al.||5-compartment model||900||10% per hemisegment||SPLIT||Unknown||100 μs||None|
|2007||Frye, Ananthanarayanan et al.||Low-complexity spiking neurons (Izhikevich-like), STDP synapses||8x10^6||6300×8x10^6 = 5.04x10^10||4096-processor BlueGene/L, 256 MB per CPU||10||1 ms||None|
|2007-2005||Brette, Rudolph et al.; Traub, Contreras et al.||14 cell types, conductance and compartment model||3,560||3,500 gap junctions, 1,122,520 synapses||Cray XT3 2.4 GHz, 800 CPUs||200?||Event-driven||5954-8516 equations/CPU gives # compartments|
|2005||Traub, Contreras et al.||14 cell types, conductance and compartment model, 11 active conductances||3,560||3,500 gap junctions, 1,122,520 synapses||14-CPU Linux cluster, an IBM e1350, dual-processor Intel P4 Xeon, 2.4 GHz, 512 KB of L2 cache; or e1350 Blade 1, at 2.8 GHz||Unknown||2 μs||None|
|2001||Goodman, Courtenay Wilson et al.||3-layer cortical column, 25% GABAergic, conductance models; synapse reversal potential, conductance, absolute strength, mean probability of release, time constants of recovery, depression, and facilitation||5,150 per node, max 20×5,150 = 103,000||In columns 192,585 per node, max 20 nodes = 3.85x10^6; intercolumn 1.4x10^6||NCS, 30 dual-Pentium III 1 GHz processor nodes, with 4 GB of RAM per node||165,000||Unknown||None|
|2004||Frye||Four cell types. Km, Ka, Kahp channels||1,000 cells/column, 2,501 columns = 2.5x10^6||250K per column, total 6.763x10^8||BlueGene, 1024 CPUs||3000?||Unknown||2.338x10^10 spikes per processor. Memory requirement 110 MB per processor|
|2000||Aberdeen, Baxter et al.||ANN feedforward network trained with iterative gradient descent||4,083||1.73x10^6||Cluster, 196 Pentium III processors, 550 MHz, 384 MB||Unknown||Unknown||None|
|1996||Kondo, Koshiba et al.||Hopfield network||1,536||2.4x10^6||Special-purpose chips||Unknown||Unknown||None|
|2003||Ccortex||Layered distribution of neural nets, detailed synaptic interconnections, spiking dynamics||2x10^10||2x10^13||500 nodes, 1,000 processors, 1 TB RAM. Theoretical peak of 4,800 GFLOPS.||Unknown||Unknown||Claimed performance, not documented.|
|2002||Harris, Baurick et al.||Point neurons, spikes, AHP, A & M potassium channels, synaptic adaptation; spike shape and postsynaptic conductance specified by templates||35,000 cells per node||6.1x10^6 synapses per node||Cluster, 128 Xeon 2.2 GHz processors with 256 GB RAM, 1 TB disk||Unknown||Unknown||None|
|2003||Mehrtash, Jung et al.||Integrate-and-fire, event-driven pulse connections with STDP plasticity. 1-8 dendritic segments.||256,000||10^6||Special-purpose chip connected to an UltraSPARC processor, 500 MHz, 640 MB||1021.21 timesteps/ms||Unknown||None|
|2002||Shaojuan and Hammarstrom||Palm binary network||256,000||On the order of 3.2x10^10||512 processors, 192 GB, 250 MHz MIPS R12K processors||Execution time 4 min 7 s (~13% in MPI calls) for retrieving 180K training vectors, ~O(10) iterations||Unknown||None|
|2007||Johansson and Lansner||BCPNN spiking, minicolumns, hypercolumns||1.6x10^6||2.0x10^11||256-node, 512-processor Dell Xeon cluster, 3.4 GHz, 8 GB memory, peak performance 13.6 GFLOPS||9% of real-time = 11.1||Unknown||Weight updates 47 ms, activity updates 59 ms|
|2006||Markram||Morphologically complex compartment models||10,000||10^8||BlueGene/L, 4096 nodes, 8192 CPUs, peak performance 22.4 TFLOPS||Unknown||Unknown||Not documented?|
|1982||Traub and Wong||Unknown||100||Unknown||Unknown||Unknown||Unknown||None|
|1992||Traub, Miles et al.||19-compartment cells, ionic currents||1,200||70,000||IBM 3090 computer||325 min/1.5 s bio time = 13,000||0.05 ms||None|
|1997||Moll and Miikkulainen||Binary units with binary weights||79,500||7.82x10^8||Cray Y-MP 8/864||0.006 s (retrieval of 550,000 patterns; assuming human retrieval speed = 1 s)||Unknown||None|
A synaptic spike is like a FLOP
Most calculations of the computing power of the brain are based on synaptic spikes. These are assumed to be the basic, atomistic interactions over which everything else is built. Thus, if one knows the number of synapses and the mean number of spikes, one could measure the computational capacity of the brain in "Synapse operations per second" or "Spikes per second". See, for example, Moravec.
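A back-of-envelope version of such an estimate, using the synapse count quoted at the top of this article and an assumed mean firing rate (the rate is an illustrative guess, not a measurement):

```python
# The "synaptic operations per second" style of estimate described above.
# Synapse count is the ~500 trillion figure from this article; the ~1 Hz mean
# firing rate is an illustrative assumption.

synapses       = 5e14   # ~500 trillion synapses
mean_rate_hz   = 1.0    # assumed average spike rate per synapse
ops_per_second = synapses * mean_rate_hz

print(f"{ops_per_second:.0e} synapse-ops/s")  # prints 5e+14
```

The next paragraphs explain why a number like this is not the same thing as a FLOPS requirement.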
The problem with these models is that they take only spikes into account. Measuring computational capacity in "spikes per second" would capture the brain's computational capacity only if its neurons formed a kind of biological computer circuitry. It does not measure all the underlying computation that takes place to determine whether or not there will be a spike.
Consider gene expression: it is fairly low-level, it has nothing to do with spikes, and it is required for neuroplasticity, the basis of learning. Neural structures can't be modified or created without neuroplasticity. An upload wouldn't be able to learn without it, which means that a spike-only model of the brain would need special considerations for gene expression -- and that's a toughie, FLOPS-wise. Other things, like the chemical environment of the brain, would also need to be included in the model. In short, while spiking models are accurate for instantaneous activity, long-term change and neuroplasticity require the lower-level events to be taken into account. Moreover, purely spiking models are still very computationally complex, and the computations underlying a single spike in such models are far, far more than a single FLOP.
There do exist computationally cheap models that are very accurate; see the Izhikevich model of spiking neurons. Even this model, however, consists of two differential equations. These are the underlying computations leading up to the spike, and they are sufficiently complex to destroy the idea that a synaptic spike is anything like a single floating-point operation.
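Those two equations, with the standard regular-spiking parameters from Izhikevich's 2003 paper and a simple forward-Euler integration, fit in a few lines; note how many arithmetic operations every timestep of every neuron costs, even in this deliberately cheap model:

```python
# The two differential equations of the Izhikevich model:
#   v' = 0.04*v^2 + 5*v + 140 - u + I
#   u' = a*(b*v - u)
# with reset v <- c, u <- u + d when v reaches the spike peak (30 mV).
# Parameters a, b, c, d below are the standard regular-spiking values.

def izhikevich(I, t_ms=1000, dt=0.5, a=0.02, b=0.2, c=-65.0, d=8.0):
    """Return spike times (ms) for a constant input current I (forward Euler)."""
    v, u = -65.0, -65.0 * b   # membrane potential and recovery variable
    spikes = []
    for step in range(int(t_ms / dt)):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:         # spike peak reached: record and reset
            spikes.append(step * dt)
            v, u = c, u + d
    return spikes

# A constant suprathreshold input produces regular spiking.
spike_times = izhikevich(I=10.0)
```

Each timestep costs roughly a dozen floating-point operations per neuron, before any synaptic bookkeeping, which is the point the paragraph above is making.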
IBM has emulated the brain of a cat
No, no, no, no, no. Absolutely not.
We can send nanobots to scan the brain
The Blue Brain Project is trying to emulate the mind
The Blue Brain Project is a simulation effort aimed at running pharmacological trials in silico and obtaining a better understanding of the brain. It is not an uploading effort, and none of its participants guarantee that it will produce a mind.
What do we need to make this happen?
- Some form of plastination that preserves the most features.
- See Brain Preservation Foundation and Brain Preservation Technology Prize
- Vitrification + cryoultramicrotomes?
- Faster scanning
- Divide the brain into smaller sections, give them to more machines instead of building faster scanners
- Hemispherectomy + serial sagittal cuts -> each cut into a single ATLUM?
- Massively parallel electron microscopes + parallel ATLUMs
- Divide the brain into smaller sections, give them to more machines instead of building faster scanners
- A hardware implementation of the Izhikevich model of spiking neurons that fits in a couple microns
- Probably need Molecular Manufacturing to get neuromorphic chips with 1011 neurons.
- Too bad it's not happening any time soon
- Alternative, high-density computing systems that don't require mol nano are... Bio computers :)
- You could escape this if you didn't need to have the computers in the upload's physical proximity, ie cloud computing
- Expensive and unsafe
Neuroscience: Exploring the Brain by Mark F. Bear, Barry W. Connors, Michael A. Paradiso, 2007
Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems by Peter Dayan, L.F. Abbott, 2001
The Computational Brain by Patricia Churchland, Terrence J. Sejnowski, 1992
Dynamical Systems in Neuroscience: The Geometry of Excitability and Bursting by [[Eugene Izhikevich]], 2007
"A unique contribution to the theoretical neuroscience literature that can serve as a useful reference for audiences ranging from quantitatively skilled undergraduates interested in mathematical modeling, to neuroscientists at all levels, to graduate students and even researchers in the field of theoretical neuroscience." - Jonathan E. Rubin, Mathematical Reviews
The NEURON Book by Nicholas T. Carnevale, Michael L. Hines, 2006
- An Introduction to Mind Uploading and its Concepts by Randal Koene (Web archive, highly dated).
- WBE Intro slides by Randal Koene
- Ghosts, computers, and Chinese Whispers by Toby Howard, about the "soulcatcher chip" hoax.
- The Society of Neural Prosthetics and Whole Brain Emulation Science
- Mind Uploading Home Page, by Joe Strout. A very good, if a little dated, resource.
- Energy Limits to the Computational Power of the Human Brain by Ralph Merkle.
- How Many Bytes in Human Memory? by Ralph Merkle.
- Dualism Through Reductionism by Hans Moravec. Does personhood, consciousness and intelligence reside in the pattern or substrate?
- Of course, one can make the objection that pattern and substrate are inseparable.
- Whole Brain Emulation: The Logical Endpoint of Neuroinformatics? by Anders Sandberg (Google Tech Talk).
- Feasibility of whole brain emulation by Anders Sandberg.
- "Technical Roadmap for Whole Brain Emulation", a talk by Anders Sandberg at the 2009 Singularity Summit.
- Substrate Independent Minds: Technical Challenges by Randal Koene.
In Popular Culture
- The 'Copies' in Greg Egan's Permutation City are uploads, although they were scanned by tomography and other non-destructive means, meaning the original survived, giving rise to all sorts of philosophical nonsense.
- Everything by Greg Egan
- In David Zindell's A Requiem for Homo Sapiens series, the Architects of the Cybernetic Universal Church are 'vastened' after their deaths.
In addition to the Glossary of h+ terminology, the following neuroscience glossary is taken from the Mind Uploading Home Page and the WBE Roadmap:
- Action Potential - A type of neuron output signal which does not lose strength over long distances. Commonly called "nerve impulse", and when an action potential occurs, the neuron is often said to "fire an impulse".
- AFM - Atomic Force Microscope.
- ANN - Artificial Neural Network, a mathematical, abstract model based on biological neural networks.
- ATLUM - Automatic Tape-Collecting Lathe Ultramicrotome.
- Autoregulation - Regulation of blood flow to maintain the (cerebral) environment.
- Axon - A long projection from a neuron's cell body that conducts output signals (action potentials) away from the cell. A single axon may be as long as a meter. Contrast dendrite.
- Blockface - The surface of an imaged sample, especially when cutting takes place.
- Bouton - The typical synaptic bump that grows between axons and dendrites.
- Central Nervous System (CNS) - The set of neurons, fibers, and support structures within the skull and spine. Contrast peripheral nervous system.
- CNS - Central Nervous System
- Confocal microscopy - Optical imaging technique to image 3D samples by making use of a spatial pinhole.
- Connectome - The total set of connections between regions or neurons in a brain.
- Dendrite - A "branch" of a neuron which conducts voltages to transfer information from one part of a cell to another. Dendrites typically serve an input function for the cell, but many dendrites also have output functions. Contrast axon.
- Electron Microscopy - In general, any form of high-resolution imaging which uses a beam of electrons to probe the sample. Practical resolution for biological samples is about 2 nm, though the theoretical limit is about an order of magnitude higher.
- Exaflop - 1018 FLOPS.
- Extracellular - Outside the cell.
- FIBSEM - Focused Ion Beam SEM.
- FLOPS - Floating-Point Operations Per Second. A measure of computing speed useful in scientific calculations.
- Fluorophore - A molecule or part of molecule that causes fluorescence when irradiated by UV light. Used for staining microscope preparations.
- FPGA - Field‐Programmable Gate Array. Semiconductor device that contains programmable logic and interconnects, allowing the system designer to set up the chip for different purposes.
- GABAergic - Related to the transmission or reception of GABA, the chief inhibitory neurotransmitter.
- Gap Junction - A place where the cell membranes of two cells come in direct contact, allowing electrical potentials to be conducted directly from one to another. Contrast synapse.
- GFLOP - Gigaflop, a billion FLOPS.
- Glia - Cells in nervous system which are not neurons, but serve various support functions (e.g., provide myelin for axons, clean up after cell damage or death, etc.). Some evidence indicates that certain types of glia (esp. astrocytes) may serve information-processing roles as well.
- Hypercolumn - A group of cortical minicolumns organised into a module with a full set of values for a given set of receptive field parameters.
- Interneuron - In the CNS, a small locally projecting neuron (unlike neurons that project to long‐range targets) that is not motor or sensory.
- Kinase - An enzyme that phosphorylates other molecules.
- Ligand - A molecule that bonds to another molecule, such as a receptor or enzyme.
- Metabolome - The complete set of small‐molecule metabolites that can be found in an organism.
- MFLOPS - Millions of Floating-Point Operations Per Second. A measure of computing speed.
- Microtubule - A component of the cytoskeleton, composed of smaller subunits (tubulin molecules), which serves as a framework of structural support.
- Minicolumn - A vertical column through the cerebral cortex. A physiological minicolumn is a collection of about 100 interconnected neurons, while a functional minicolumn consists of all neurons that share the same receptive field.
- MIPS - Millions of Instructions Per Second. A measure of computing speed.
- Motor neuron - A neuron involved in generating muscle movement.
- MRI - Magnetic Resonance Imaging.
- Myelin - A fatty substance which surrounds many axons, enabling them to conduct action potentials more quickly.
- Neocortex - The cerebral cortex covering the cerebral hemispheres. The term distinguishes it from related but evolutionarily older cortical areas, which have fewer than six layers.
- Neurite - A projection from the cell body of a neuron, in particular from a developing neuron where it may become an axon or dendrite.
- Neurogenesis - The process by which neurons are created from progenitor cells.
- Neuromodulator - A substance that affects the signalling behaviour of a neuron.
- Neuromorphic - Technology aiming at mimicking neurobiological architectures.
- Neuron - A cell specialized for processing information. A typical neuron has many input branches (dendrites) and one output branch (axon), though there are many exceptions. The human brain contains roughly 10^11 neurons (about 85 billion).
- Neuropeptide - A neurotransmitter that consists of a peptide (amino acid chain).
- Neurotransmitter - A chemical that relays, amplifies or modulates signals from neurons to a target cell (such as another neuron).
- Parallelization - The use of multiple processors to perform large computations faster.
- PCR - Polymerase Chain Reaction, a technique for amplifying DNA from a small sample.
- Peripheral Nervous System (PNS) - The set of neurons, nerve fibers and support structures outside the brain and spine. Contrast central nervous system.
- Petaflop - 10^15 FLOPS.
- Phosphorylation - Addition of a phosphate group to a protein molecule (or other molecule), usually done by a kinase. This can activate or deactivate the molecule and plays an important role in internal cell signalling.
- PNS - Peripheral Nervous System.
- Potentiation - The increase in synaptic response strength seen after repeated stimulation.
- SBFSEM - Serial Block‐Face Scanning Electron Microscopy.
- SEM - Scanning Electron Microscopy.
- Sigmoidal - S‐shaped, usually denoting a mathematical function that is monotonically increasing, has two horizontal asymptotes and exactly one inflection point.
- Skeletonization - Image processing method where a shape is reduced to the set of points equidistant from its boundaries, representing its topological structure.
- Soma - The cell body of a neuron.
- Spectromicroscopy - Methods for making spectrographic measurements of chemical composition in microscopy.
- SSET - Serial Section Electron Tomography.
- SSTEM - Serial Section Transmission Electron Microscopy.
- Supervenience - A set of properties A supervenes on a set of properties B if and only if any two objects x and y that share all their B properties must also share all their A properties. Being B‐indiscernible implies being A‐indiscernible.
- Synapse - A chemically mediated connection between two neurons, so that the state of the one cell affects the state of the other. (In some cases, a cell makes a synapse onto itself.) Synapses typically occur between an axon and a dendrite, though many other arrangements also occur.
- Synaptic spine - A small protrusion from a dendrite that forms the postsynaptic side of many synapses, offset from the parent dendrite by a thin neck.
- TEM - Transmission Electron Microscopy.
- TFLOPS - Teraflops, 10^12 FLOPS.
- Tortuosity - A measure of how many turns a surface or curve makes.
- V1 - The primary visual cortex.
- Voxel - A volume element, representing a value on a regular grid in 3D space.
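The supervenience definition can be stated formally; one standard rendering, treating A and B as sets of properties:

```latex
% A supervenes on B iff B-indiscernibility implies A-indiscernibility:
\forall x \, \forall y \;
  \Bigl[ \bigl( \forall P \in B : P(x) \leftrightarrow P(y) \bigr)
  \rightarrow
  \bigl( \forall Q \in A : Q(x) \leftrightarrow Q(y) \bigr) \Bigr]
```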
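The computing-speed units above (MFLOPS, GFLOP, TFLOPS, petaflop, exaflop) differ only in their power of ten. A minimal sketch of the conversions, using a made-up machine figure as the example:

```python
# Powers of ten for the FLOPS prefixes used in this glossary.
FLOPS_PREFIXES = {
    "MFLOPS":   10**6,   # millions of floating-point operations per second
    "GFLOP":    10**9,   # gigaflop
    "TFLOPS":   10**12,  # teraflops
    "petaflop": 10**15,
    "exaflop":  10**18,
}

def flops(value, prefix):
    """Convert a prefixed figure to raw floating-point operations per second."""
    return value * FLOPS_PREFIXES[prefix]

# A hypothetical 1.5-exaflop machine, expressed in teraflops:
print(flops(1.5, "exaflop") / FLOPS_PREFIXES["TFLOPS"])  # → 1500000.0
```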
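The neuron entry above describes a cell that integrates dendritic inputs and emits spikes along its axon. A toy leaky integrate-and-fire model, one of the simple abstractions used in neural simulation, sketches this behaviour; every constant here is illustrative rather than biologically calibrated:

```python
def simulate_lif(input_current, dt=1.0, tau=10.0, v_rest=0.0,
                 v_threshold=1.0, v_reset=0.0):
    """Toy leaky integrate-and-fire neuron.

    The membrane potential v leaks toward v_rest and is driven by the
    input current; a spike is recorded whenever v crosses v_threshold,
    after which v is reset. All constants are illustrative.
    """
    v = v_rest
    spikes = []
    for t, i in enumerate(input_current):
        v += dt * (-(v - v_rest) / tau + i)   # leaky integration step
        if v >= v_threshold:
            spikes.append(t)
            v = v_reset
    return spikes

# A constant drive makes the neuron fire at a regular rate:
print(simulate_lif([0.15] * 50))  # → [10, 21, 32, 43]
```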
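The sigmoidal entry can be made concrete with the logistic function, one standard example of an S-shaped curve, chosen here purely as an illustration:

```python
import math

def logistic(x):
    """The logistic function, a standard sigmoidal curve: monotonically
    increasing, with horizontal asymptotes at 0 and 1 and a single
    inflection point at x = 0."""
    return 1.0 / (1.0 + math.exp(-x))

print(logistic(0))            # → 0.5 (the inflection point)
print(logistic(10) > 0.999)   # → True: near the upper asymptote
print(logistic(-10) < 0.001)  # → True: near the lower asymptote
```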
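The skeletonization entry can be illustrated with a toy version of the idea: compute each foreground pixel's distance to the boundary, then keep the ridge of local maxima, which traces the points farthest from (and roughly equidistant to) the boundaries. This is a sketch only; production pipelines use more careful thinning algorithms:

```python
from collections import deque

# A small binary image: a solid 5x7 rectangle of foreground pixels.
H, W = 5, 7

# Multi-source BFS from the boundary gives each pixel its (4-connected)
# distance to the nearest background pixel.
dist = [[0] * W for _ in range(H)]
q = deque()
for y in range(H):
    for x in range(W):
        if y in (0, H - 1) or x in (0, W - 1):  # touches background
            dist[y][x] = 1
            q.append((y, x))
while q:
    y, x = q.popleft()
    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        ny, nx = y + dy, x + dx
        if 0 <= ny < H and 0 <= nx < W and dist[ny][nx] == 0:
            dist[ny][nx] = dist[y][x] + 1
            q.append((ny, nx))

def neighbours(y, x):
    """Distances of the 4-neighbours, with 0 for out-of-grid cells."""
    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        ny, nx = y + dy, x + dx
        yield dist[ny][nx] if 0 <= ny < H and 0 <= nx < W else 0

# The skeleton is the ridge of local maxima of the distance map.
skeleton = {(y, x) for y in range(H) for x in range(W)
            if all(dist[y][x] >= d for d in neighbours(y, x))}
print(sorted(skeleton))
```

For the rectangle this recovers the expected medial axis: a central horizontal segment plus diagonals toward the corners.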
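A voxel grid (the natural data structure for volumetric scan data) is typically stored as a flat array with index arithmetic; a minimal sketch, with arbitrary grid dimensions:

```python
# Map (x, y, z) coordinates on a regular grid to an index into a flat
# array of voxel values, x varying fastest. Dimensions are illustrative.
NX, NY, NZ = 4, 3, 2
voxels = [0.0] * (NX * NY * NZ)

def voxel_index(x, y, z):
    assert 0 <= x < NX and 0 <= y < NY and 0 <= z < NZ
    return x + NX * (y + NY * z)

voxels[voxel_index(2, 1, 1)] = 7.5   # set one volume element
print(voxel_index(0, 0, 0))  # → 0, the first cell
print(voxel_index(3, 2, 1))  # → 23, the last cell
```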