Opinions expressed in this commentary are solely those of MRD.
Computationalism is the position in the philosophy of mind that the human mind or the human brain (or both) is an information processing system and that thinking is a form of computing. Computationalism argues that the relationship between mind and body is similar or identical to the relationship between software and hardware and thus may be a solution to the mind-body problem.
Churchland has focused on the interface between neuroscience and philosophy.
The fundamental questions addressed in cellular neuroscience include the mechanisms of how neurons process signals physiologically and electrochemically.
Cells communicate with each other via direct contact (juxtacrine signaling), over short distances (paracrine signaling), or over large distances and/or scales (endocrine signaling).
See the MAPK/ERK pathway article for details.
By now you have hopefully noticed that this blog article is a bit meandering. That's because I didn't write it. Neither the text, nor the image, nor even most of the title (I added the "Really" bit) was written by a being at all. Rather, it was produced by an algorithm implemented by Darius Kazemi. The algorithm is simple: it extracts passages from Wikipedia and the like, and it entirely lacks what has come to be called AI. Nonetheless, it makes the same point that more sophisticated systems do: the need for a truly intelligent being to do something disappears once intelligence turns out not to be required for it. Since many articles seem to be written for nothing more than getting your eyes onto an adjacent advertisement, an unsophisticated blog bot will do just fine. Note: we have no advertisements on this page :-)
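The extract-and-stitch procedure described above can be sketched in a few lines. This is a hypothetical illustration, not Kazemi's actual code: the fetch step is stubbed with hardcoded extracts (a real bot might pull them from Wikipedia's public API), and the function and variable names are my own.

```python
# Minimal sketch of a Kazemi-style blog bot: it stitches unrelated topic
# extracts into a "post" with no understanding of the content.
# The research step is stubbed; a real bot would fetch extracts from an API.

def assemble_post(title, extracts):
    """Join extracts into one meandering article body under a title."""
    paragraphs = [text.strip() for _, text in extracts if text.strip()]
    return title + "\n\n" + "\n\n".join(paragraphs)

# Stubbed "research" step: hardcoded (topic, extract) pairs.
extracts = [
    ("Computationalism",
     "Computationalism is the position that thinking is a form of computing."),
    ("Cell signaling",
     "Cells communicate via juxtacrine, paracrine, or endocrine signaling."),
]

post = assemble_post("Everything You Need to Know About AI", extracts)
print(post)
```

The point of the sketch is how little machinery is involved: no model, no learning, just concatenation of text someone else wrote.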
The subject of automation, whether or not it includes true AI, is large and could easily be written about at greater length than anyone would care to read. Suffice it to say, it's here to stay, just as computers once were, and it faces the same skepticism from some who suspect it may be a fad.
There's much concern about jobs being eliminated, machines taking over, and so on. The eliminated jobs are probably ones people don't want, or shouldn't do, anyway. And the takeover could probably be kept in check by ensuring machines are never able to control, or even learn of, their own power source; they would simply take it for granted, because it is granted.
Keeping faithful to this blog's automated title, "Everything You Need to Know About AI": what you really need to know is that we don't know what will come of it. Not at this point, anyway, because humans are unpredictable, and what we do with something like AI is therefore unpredictable too. My guess is that humans will remain somewhere in the loop of automation; either front and center in orchestration, as with this blog, or backstage somewhere. And they should be, to pull the plug if needed.