Technology, the Brain, Networkism, and Heroism

Excellent points Jim, all very relevant.

I agree completely there are some things we have to do for ourselves. I
think making art is one of these. I frequently think of this using the
metaphor, as you state below, of eating. If you get someone or something
else to eat for you, you haven't eaten.

Also, if someone or something else makes a choice for you, you haven't made
a choice. Making choices is required for learning, so to develop one must
be free, if within limits.

As to a new intellectual era where we crack the brain's code, I remain
skeptical. This would still not remove the tasks we have to work on
ourselves to develop. There's a tough kernel of personal responsibility for
aesthetic growth that no external machine or knowledge can take care of for
us.

I may be misinterpreting, but even if we discover the brain's code and/or
invent a machine to create wisdom synthetically and dump a copy of it into us,
we still won't have become wise–we'll have been replaced.

Thus the issue is to do the work of wisdom and understanding, which people
avoid because it's hard and scary, not because we lack the technical data of
how to do it.

Or am I misreading you? Do you think knowing how the brain codes info will
make people wiser? That could very well be. I think such technological
solutions to human misery are all to the good. But they're not the same as
human solutions, which I think are more important in the long run. After
all, once we knew the brain code we could just as easily use it for "killing
and moronics" as for understanding, correct? Again back to Wiener, making
choices, moral judgment, aesthetic freedom, aesthetic evolution–not merely
technological evolution.

Or from my essay on Groote and the Preface for this year's conference:

As Wordsworth states in the 'Preface,

Comments

Jim Andrews

Rob's post was good, as usual, I thought, on a number of points, but
especially in stressing that what we see now, and what we're going to see in
the future, in computer art and other activities involving computers, is
that computers don't usually replace humans unless the activity is
particularly well-suited to what computers do well. Instead, we see people
using computers to help them do this or that, or to do something in a very
different way that may shed new light on things, such as Rob's computer
culinary connoisseur, which sounds like a lot of fun and interesting as
well. Very humorous, and possibly also enlightening on any number of fronts.

> Excellent points Jim, all very relevant.
>
> I agree completely there are some things we have to do for ourselves. I
> think making art is one of these. I frequently think of this using the
> metaphor, as you state below, of eating. If you get someone or something
> else to eat for you, you haven't eaten.
>
> Also, if someone or something else makes a choice for you, you
> haven't made
> a choice. Making choices is required for learning, so to develop
> one must
> be free if within limits.

Again, the stress in your remarks is on computers replacing human action,
but I don't really think that's the most interesting or useful direction
people take it in art or business.

Though, you know, I'd personally like a money machine. You turn it on in the
morning and all it does all day is make money. And then you turn it off at
night. Maybe.

> As to a new intellectual era where we crack the brain's code, I remain
> skeptical. This would still not remove the tasks we have to work on
> ourselves to develop. There's a tough kernel of personal
> responsibility for
> aesthetic growth that no external machine or knowledge can take
> care of for
> us.

Yes!

> I may be misinterpreting, but even if we discover the brain's code and/or
> invent a machine to create wisdom synthetically and dump a copy
> of it into us,
> we still won't have become wise–we'll have been replaced.

Well, I don't think we'll see wisdom packaged anytime soon. Probably, like,
never.

> Thus the issue is to do the work of wisdom and understanding,
> which people
> avoid because it's hard and scary, not because we lack the
> technical data of
> how to do it.

It's a learning process, isn't it, a process that one cannot see being
divorced from living and experience–learning from it.

> Or am I misreading you? Do you think knowing how the brain codes
> info will
> make people wiser? That could very well be. I think such technological
> solutions to human misery are all to the good. But they're not
> the same as
> human solutions, which I think are more important in the long run. After
> all, once we knew the brain code we could just as easily use it
> for "killing
> and moronics" as for understanding, correct? Again back to
> Wiener, making
> choices, moral judgment, aesthetic freedom, aesthetic
> evolution–not merely
> technological evolution.

If we can understand the way the brain processes information and stores and
retrieves memories, it will certainly help us improve our own information
processing and memory storage and retrieval. That doesn't mean it'll make us
wiser, though, because, as you point out, wisdom is a matter of strong moral
judgement, which is not simply the result of fine, healthy physiology.
Though, of course, fine, healthy physiology in matters concerning the brain
would surely be a help.

There's a lot of fear associated with the notion that we now basically have
the tools to understand–even reproduce–the processes of thought. As much
fear as surrounded Darwin's notions of evolution and natural selection 150
years ago (and even today, in some parts, as we know). But, just as we now
understand that *we are not diminished* by the ideas of evolution and
natural selection, so too, I believe, will cultures come to see the idea
that our brains are information processing systems not as something
threatening that will lead to our being replaced by machines. Instead, we
will arrive at a deeper appreciation of human brains, minds, and thought
processes.

> Or from my essay on Groote and the Preface for this year's conference:
>
> As Wordsworth states in the 'Preface,

Max Herman

Many interesting points Jim! Thanks for taking the time during my yearly
re-visit to the list!


>From: Jim Andrews <[email protected]>
>Reply-To: Jim Andrews <[email protected]>
>To: [email protected]
>Subject: RE: RHIZOME_RAW: Technology, the Brain, Networkism, and Heroism
>Date: Fri, 14 Sep 2007 21:50:29 -0700