Phadroid and The Human Experience – Catching The Ghost

October 19th, 2011 | Posted by paul in Visionary Art - (Comments Off)

 

Andrew Jones: Their physical form can be almost a vehicle… There are moments when a dancer can train themselves to fully let go of themselves and their ego and their identity and become this canvas where this ghost comes in and takes over.

Phaedra Jones: I believe there is a difference between technique and the process of creativity or spirituality, because that process creates itself. Whereas technique or structure is created by the mind… there is a huge difference between the state of mind and the state of no-mind. If humanity is going anywhere further than where it is now, I believe it can receive more information from states of no-mind than states of the known. And that’s what we are trying to explore together. It takes a lot of courage because we are so used to being in control. When you don’t know where you are going, you can go to places you cannot even comprehend.

Visit: The Human Experience


Reality 3.0

September 10th, 2003 | Posted by paul in Uncategorized - (Comments Off)

A friend of mine has taken an interest in my book idea, which up until now has been on the back burner. Her interest sparked a discussion and my desire to share a portion of my working notes.

A central thesis of the book is the idea of customized mediation. Think beyond Virtual Reality, even beyond currently conceived implications of Augmented Reality.

Think:

Sense Editors and Processors. Except in this case it’s not an accessorized tool, but an integrated part of your evolving neural-net. Customized synaesthesia becomes an operating ground state for further experience.

While we are at it, think of Mood Editors and Processors. Think of the optimized sane and loving states experienced by Ecstasy users without any degradation or loss of signal. As the metabolic mechanisms responsible for such states are fully realized, the ecstasy experience becomes an operating ground state in which to explore and experience states of even further optimized sanity and pleasure.

Now imagine that each average person, by this time fully post-biological, has ready access to 10^30 ops/sec of computation. Assuming anything close to Moore’s Law holds, such power could be available by 2060. Based on calculations by several individuals not referenced here, 10^30 ops/sec is more than sufficient to fully simulate the entire biosphere circa 2000, including all sentient lifeforms’ internal experience as conscious beings. Not that this would necessarily be done, only that post-biological beings would be able to experience all of the realities currently accessible to the human race in complete detail, down to the wind and dust particle one encounters on a beautiful hike through the rolling hills. They could experience the earth in all its detail and even customize it in any way imaginable. They could choose to experience this simulated earth by flying and floating through it like one does in really good flying dreams. Imagine the joy of flying around the earth and taking in all the sights, sounds and smells, as real or more real than anything we have ever experienced. With fully customizable sense and mood editors, such experience could come in just about any flavor you wanted – down to such subtleties as that blissfully pleasant warmth you get when you’re really enjoying the outdoors, smelling beautiful spring flowers, and getting a constant full body massage while flying with only enough wind in your face to make it maximally pleasurable.
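As a rough sanity check on that 2060 figure: if you assume a baseline of about 10^16 ops/sec around 2003 and a Moore’s-Law-like doubling every 1.2 years (both figures are my own assumptions here, not from the calculations alluded to above), the arithmetic lands in the right decade:

```python
import math

# Hypothetical assumptions: ~10^16 ops/sec available circa 2003 and a
# doubling time of ~1.2 years, a Moore's-Law-like pace.
baseline_year = 2003
baseline_ops = 1e16
doubling_years = 1.2
target_ops = 1e30

# Number of doublings needed to climb from the baseline to 10^30 ops/sec.
doublings = math.log2(target_ops / baseline_ops)
year_reached = baseline_year + doublings * doubling_years
print(round(year_reached))  # roughly 2059
```

About 46 doublings are needed, so the estimate is quite sensitive to the assumed doubling time: stretch it to 1.5 years and the date slips past 2070.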

Ok, so where does mediation come in? Well, imagine that these hyper-realistic and fully customizable realities can act as mediators with what is going on outside of your internal reality. Neural-net algorithms could eventually evolve and adapt such that experiences in this “fictional” earth are mediated experiences of what’s actually going on in other places in the “real” post-biological universe. Maybe a friend who is joining you on these flying trips through simulated earths is actually experiencing something totally different, and in their reality you are doing completely different things as mediated through their “editors”.

Imagine that your flying around this simulated and hyper-realistic world is one way your neural net has adapted to navigating and plotting interstellar trajectories. The source of your mediated realities would be the entire sensory network of this post-biological network. Let’s call this network “Cosm”.

Right now, we are limited by our bodily sense organs and, to a lesser and more indirect degree, by the sense input from microscopes, telescopes, etc. But in Cosm, its entire network of sensory organs becomes your own senses for experiencing the outside universe. In other words, a node in Cosm’s SenseNet becomes your eyes and ears, smell and touch, more so than your own senses now. Better still, we can choose how such sensory input looks and feels, and with what intensity. For example, imagine experiencing new astrophysical data mediated into correlated modalities of sensual massage.

Imagine flying through space as if you were a giant 100,000 km in size, putting your (virtual) hand inside Jupiter and feeling the temperature gradients. Except in this case absolute zero is experienced as 0 °C and 10,000 degrees as hot bath water. You could even have Jupiter disperse as if your hand really were that big and disrupted its composition. Of course this would be simulated based on updated data, rather than actually happening!
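That kind of remapping is just a linear compression of the physical temperature range onto a comfortable perceptual band. A minimal sketch, assuming 40 °C for “hot bath water” and clamping anything outside the mapped range (both choices are mine, not the post’s):

```python
def felt_temperature(physical_kelvin,
                     physical_range=(0.0, 10_000.0),
                     felt_range_c=(0.0, 40.0)):
    """Linearly compress a physical temperature onto a comfortable
    perceptual band: absolute zero feels like 0 degrees C, and 10,000 K
    feels like hot bath water (~40 degrees C)."""
    p_lo, p_hi = physical_range
    f_lo, f_hi = felt_range_c
    frac = (physical_kelvin - p_lo) / (p_hi - p_lo)
    frac = max(0.0, min(1.0, frac))  # clamp outside the mapped range
    return f_lo + frac * (f_hi - f_lo)

# Jupiter's cloud tops sit around 165 K; deeper layers are far hotter.
print(round(felt_temperature(165), 2))  # a cool touch, under 1 degree C
print(felt_temperature(10_000))         # 40.0: hot bath water
```

The same mapping works for any sense editor: pick the physical range you care about, pick the band your body finds pleasant, and interpolate.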

I’ll leave you now with just the following salient points to ponder. These augmented and mediated realities will replace the very limited way we experience reality now. The idea of a universe experienced objectively will be seen as the illusion it’s always been. Here is a funny question: are we upgrading ourselves or reality? “Hey Joe, you’re still experiencing Reality v3.1; you should upgrade to Reality v4.0 with its new expanded sensory editor!” Ironically, each of us as mere biological humans mediates reality through our own neural-sensory channels now, only with a lot less fidelity and flexibility. When the inevitable power of simulation and mediation exceeds our own senses, it’s only logical that we’d want to ditch our current senses (i.e. Reality v0.2, still in beta) and upgrade to better ones.

Eventually the amount of reality we create ourselves will far exceed the amount of reality being explored and discovered in the external universe. The quantity and quality of the noosphere will expand faster and grow more complex than the known physical universe. It’s my guess that the highlight of any experience will be the interaction of one intelligent being with another, and so if we discover or contact another higher intelligence, it will be a spectacularly unique event.

FOLLOW UP: Reality 3.0: Hypermediation and Paradise Engineering


Affective Glogging: Mood Mediated Real-Time Blogging

January 5th, 2003 | Posted by paul in Uncategorized - (Comments Off)

I wrote a short piece a couple of weeks ago called Beyond Blogging: Real-Time Intimate Reality. Steve Mann has just published a piece about quiki glogs, real-time unmediated CyborgLogs (“glogs“). I suspect, however, that the majority of us who become gloggers will want to be more selective in what we broadcast. The first problem that occurs to me is that you’ll quickly grow tired of deciding when to broadcast and when not to. The trick will be to automate your casting preferences, perhaps having them learn through sophisticated adaptive algorithms, fuzzy logic, and user feedback.

Combine affective computing (computers being able to read your emotions) with other forms of mediated glogging (real-time streaming), and you get affective glogging.

With affective adaptive glogging you will be able to stream your life in a fuzzy fashion based on a web-of-trust (aka friendnet). This way your real-time stream to the outside world will be broadcast selectively (mediated) based on your GPS location, mood, and whim.

For example, when you are feeling sad, you could program your computer to only broadcast to your most trusted friends, or when you are feeling highly creative and energized you might want to stream your happenings to a wider audience for interaction and feedback. And of course you’ll probably want to selectively broadcast based on your GPS location, so that your friends might know you are home and available, but everyone else sees you as offline – at least visually.
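Those rules amount to a simple mood- and trust-gated filter. A minimal sketch, where the mood labels, trust thresholds, and data model are all hypothetical illustrations rather than any real glogging API:

```python
from dataclasses import dataclass, field

@dataclass
class GlogState:
    mood: str       # e.g. "sad", "creative", "neutral"
    at_home: bool   # derived from a GPS fix
    trust: dict = field(default_factory=dict)  # viewer -> score in [0, 1]

def audience(state):
    """Viewers allowed to see the live stream, given the current mood."""
    if state.mood == "sad":
        threshold = 0.9   # sad: only the most trusted friends
    elif state.mood == "creative":
        threshold = 0.2   # creative and energized: a wide audience
    else:
        threshold = 0.6   # default: friends only
    return {v for v, t in state.trust.items() if t >= threshold}

def sees_location(state, viewer):
    """Friends see you're home and available; everyone else sees offline."""
    return state.at_home and state.trust.get(viewer, 0.0) >= 0.6

state = GlogState(mood="sad", at_home=True,
                  trust={"alice": 0.95, "bob": 0.7, "carol": 0.3})
print(audience(state))                # {'alice'}
print(sees_location(state, "bob"))    # True
print(sees_location(state, "carol"))  # False
```

The adaptive part would come from adjusting those thresholds over time from user feedback, so you stop having to decide case by case.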


Blog Mapping

December 22nd, 2002 | Posted by paul in Uncategorized - (Comments Off)

In the growing trend of merging cyberspace with physical space, Blogmapper facilitates the connection between published blog entries and specific locations. Zoom in on a specific location or site and you’ll be able to see an annotated or blogged history of everyone who has published information about it. As wearable, ubiquitous computing becomes pervasive, every location will become increasingly augmented and annotated (text, audio, video), providing a rich context of historical, personal, emotional, social and commercial connections and perspectives.
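One way to sketch that zoom-and-see lookup: store entries with coordinates and filter by great-circle distance from the point you’ve zoomed to. The sample entries and the radius query are hypothetical; Blogmapper’s actual data model isn’t described here:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in kilometers."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = (sin(dlat / 2) ** 2
         + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2)
    return 2 * 6371 * asin(sqrt(a))  # 6371 km: mean Earth radius

# Hypothetical location-tagged entries: (title, lat, lon).
entries = [
    ("Golden Gate at dawn", 37.8199, -122.4783),
    ("Mission burrito review", 37.7599, -122.4148),
    ("Tahoe ski trip", 39.0968, -120.0324),
]

def entries_near(lat, lon, radius_km):
    """All blogged annotations within radius_km of a map location."""
    return [t for t, la, lo in entries
            if haversine_km(lat, lon, la, lo) <= radius_km]

print(entries_near(37.7749, -122.4194, 25))  # the two San Francisco entries
```

A real system would swap the linear scan for a spatial index, but the interface – point plus radius in, annotation history out – stays the same.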


Beyond Blogging: Real-Time Intimate Reality

December 20th, 2002 | Posted by paul in Uncategorized - (Comments Off)

Imagine a time in the near future when almost everyone has a real-time, always-on connection to the Net via ubiquitous wearable “augmented reality” devices. As part of this package, made possible with advanced miniaturized heads-up displays, video cameras, location-aware devices, GPS, swarmbots, emotion-sensitive and adaptive algorithms (i.e. Affective Computing), and sophisticated reputation systems, you are able to surf an augmented version of reality itself in real time.

Let’s break this down. You would be able to see, in real time, precisely what’s going on anywhere on the globe by jacking in to the collection of real-time video blogs. As part of this collection, sophisticated 3-D rendering engines would be able to take the collective video footage and extrapolate a real-time VR scene, allowing you to transcend the viewing angle of any single camera. Better still, you could jack in to that part of the world from a variety of perspectives – not only physical, but political, intellectual, and emotional as well – based on whatever any individual user makes public as part of their unique sliding-scale trust system, such as the type that Joi Ito has proposed with moblogs. All of this meta-data would form its own collective smart-mob based on individually selected criteria.

What this means is that you could then view the “scene” from virtually any angle. Imagine the possibilities here. Some spontaneous news event occurs, and almost instantly, as hundreds of people appear on the scene with wearable video cameras broadcasting on the net, you would be able to view this real-time scene from any angle, while simultaneously gaining the collective emotional assessment of the situation from those people choosing to broadcast their emotional indices, as well as the blogging that will invariably start occurring at a rapid pace, filtered through your customized reputation/trust criteria.
All of this combines to give you a real-time augmented, yet customized view of real-time reality – one that is rich in social and emotional context, providing and extending intimacy by empowering you to feel and touch the whole world.
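The collective emotional assessment above can be sketched as a trust-weighted average over whatever emotion indices people choose to broadcast. The -1..+1 mood scale and the data layout are my assumptions for illustration:

```python
def scene_mood(broadcasts, trust):
    """Trust-weighted average of broadcast emotion indices (-1 = distress,
    +1 = elation); feeds from sources outside your web of trust count zero."""
    weighted = sum(trust.get(who, 0.0) * mood for who, mood in broadcasts)
    total = sum(trust.get(who, 0.0) for who, _ in broadcasts)
    return weighted / total if total else 0.0

# Three bystanders broadcasting from a breaking news scene.
broadcasts = [("alice", 0.8), ("bob", -0.2), ("stranger", -1.0)]
trust = {"alice": 1.0, "bob": 0.5}  # the stranger isn't in your web of trust

print(round(scene_mood(broadcasts, trust), 2))  # 0.47
```

Because the weights come from your own reputation/trust criteria, two viewers of the same scene can legitimately read two different collective moods.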


Headmap: Location Aware Devices

December 5th, 2002 | Posted by paul in Uncategorized - (Comments Off)

I just discovered Headmap.org, and it’s pure technology-fueled counter-culture at its best. From their website:

location aware devices . nomads . mapping sex . future architecture [life without buildings] . human geography . hyp(g)nosis . esoteric energy intelligence . community schisms . waypoints . psychogeography . community and spatial interfaces.
