Archive for the ‘design’ Category
I have recently learned about a small non-profit that is working to deliver ICT support to rural educators in Nepal.
That’s right, Information & Communication Technology in the villages and classrooms of the Himalayas.
- is self-powered (solar-rechargeable battery)
- is operated with a wireless “wand”
- has a built-in audio system
- comes loaded with CC-Licensed content (games, videos, songs, etc.)
The device projects the “desktop” onto a wall and comes with a hand-held mouse (the “wand”) to navigate. The prototype has been field tested and now they are looking for volunteers to help search for—and evaluate—content that can be loaded into the drive. (Most classrooms in rural Nepal have no electricity, much less an Internet connection.)
I would write a bit more; but, I am up to my elbows producing some educational content…gotta go. Holler at “azwaldo” at gmail dot com, anytime.
Found another problem with the TouchMeObject script (wrote about it in the last post). A new, improved version is posted (same URL). The problem was of my own making, a result of trying to avoid having multiple listeners created. I hope this update finds anyone who might have used the earlier version.
A listener creates a significant load in a simulator, enough to warrant caution in its creation. So, it is best practice to avoid keeping static (persistent) listeners in a simulator when users are not interacting with the object.
In preparing the TouchMeObject, I hastily threw in a “BUSY” variable (BOOLEAN) to clamp down on listener creation while the object attended to a user (make second and third users wait ’til the first user is done). But, I FAILED to provide any response to the next user to touch the object.
(This could be tested by two users, or one user with an alt. Touch object with first avatar, then touch with second user before the first responds to their Dialog prompt. The object does not respond.)
The problem in the first TouchMeObject script is obvious now, one I should have noticed; I had just never gone the “BUSY” route before. My old, lazy approach has typically been to create a listener, wait for X seconds of inactivity, and then remove the listener.
Not having looked at this “concurrency” issue in a while, at least as it relates to use of the llDialog function, I had a look around. Seems that how best to deal with multiple listeners is still a matter of discussion…and among much more advanced scripters*.
Still, the digging paid off: I noticed that the SL Wiki page for the llListen function states:
“…handles are assigned sequentially starting at 1”
Light bulb attachment flickers, and an old lazy approach is upgraded to a newer lazy approach. Now, we’ll create a listener only if we have ZERO stored as the EARS_OPEN handle (integer variable, now acting as BOOLEAN). (Just have to remember to set it to ZERO everywhere that counts.)
But wait! What if a user touches the object and gets the dialog prompt, then gets distracted by cute kitty pictures in a browser, and returns to the dialog prompt in (X + 1) seconds?
Answer: No joy.
It ain’t elegant. It’s not even satisfying; and, it has its problems. But…
It’s something I can live with.
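For anyone curious, here is a minimal sketch of the handle-as-flag gating described above. This is illustrative only, not the actual TouchMeObject source; the channel number, timeout, dialog text, and button labels are all placeholders.

```lsl
integer EARS_OPEN = 0;      // listen handle; ZERO means "no listener open"
integer CHANNEL = -73519;   // hypothetical dialog channel
float   TIMEOUT = 30.0;     // seconds of inactivity before closing the ears

default
{
    touch_start(integer n)
    {
        if (EARS_OPEN == 0)  // handles start at 1, so zero doubles as FALSE
        {
            key who = llDetectedKey(0);
            EARS_OPEN = llListen(CHANNEL, "", who, "");
            llDialog(who, "Pick one:", ["Yes", "No"], CHANNEL);
            llSetTimerEvent(TIMEOUT);  // safety net for distracted users
        }
    }

    listen(integer chan, string name, key id, string msg)
    {
        llListenRemove(EARS_OPEN);
        EARS_OPEN = 0;          // remember: set it to ZERO everywhere that counts
        llSetTimerEvent(0.0);
        // ...respond to the reply here...
    }

    timer()
    {
        // user wandered off to the kitty pictures; free the ears for the next toucher
        llListenRemove(EARS_OPEN);
        EARS_OPEN = 0;
        llSetTimerEvent(0.0);
    }
}
```

Note the trade-off described above: a reply arriving after the timer has removed the listener simply goes unheard.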
*UPDATE* (added single line to clear any float text if not assigned)
if (DISPLAY_FLOATING_TEXT) llSetText( llGetObjectName(), FLOAT_TEXT_COLOR, 1.0 );
else llSetText( "", FLOAT_TEXT_COLOR, 1.0 );
//SEE WHAT’S BEEN ADDED TO CONTENTS
* To read about some of the issues related to the use of listeners, see:
First CC-licensed script is now completed for the 2014 OpenSimulator Community Conference. This is part of an activity that finds more collaboration in two days than most previous projects saw…ever!
The TouchMeObject script is meant to ease the set-up of a simple “giver” object. Add it to a sign or poster or kiosk, whatever…then drag in items from Inventory that are to be delivered when a user touches the object. The script detects that change and commits those items as gifts. Several behaviors are managed just by editing variables.
Instructions at each step.
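For readers new to the pattern, here is a bare-bones sketch of this sort of “giver” (not the TouchMeObject script itself, which manages several more behaviors via its variables):

```lsl
// Minimal "giver" sketch: react to inventory changes, hand contents out on touch.
default
{
    changed(integer change)
    {
        if (change & CHANGED_INVENTORY)
        {
            // items were dragged in (or removed); the gift list below picks them up
            llOwnerSay("Contents updated; items will be given on touch.");
        }
    }

    touch_start(integer n)
    {
        list gifts = [];
        integer i;
        integer count = llGetInventoryNumber(INVENTORY_ALL);
        for (i = 0; i < count; ++i)
        {
            string item = llGetInventoryName(INVENTORY_ALL, i);
            if (item != llGetScriptName())  // don't give away the script itself
                gifts += item;
        }
        if (gifts != [])  // nonzero when the list is non-empty
            llGiveInventoryList(llDetectedKey(0), llGetObjectName(), gifts);
    }
}
```

llGiveInventoryList delivers everything as a single named folder, which saves the user from accepting each item one at a time.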
Take it for a spin if you’d like, see if it works…holler with any feedback. Please distribute willy-nilly.
I wonder: Do folks still write example scripts like this…commented to the teeth to help new scripters sort things out? (Especially in OpenSim where users seem to know what they’re doing.)
Update: Demo stays for a while.
I have designed a new tool, and now invite you to try it out.
At last year’s VWBPE conference (previous post) I wanted to give visitors a quick, customized tour of a design I was presenting…even when I was AFK.
Demonstration vendor in my parcel, in Urdu
This “Site Preview HUD”
- combines scripted camera movement with audio narration
- is “touch to wear”
- is temporary, nothing is added to Inventory
- quickly shows the points of interest in a region or build
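For the curious, the general technique can be sketched roughly as below. This is not the actual HUD script; the attach point, timing, camera coordinates, and sound clip names are all placeholder assumptions.

```lsl
// Sketch: temporary "touch to wear" attach, then step the camera
// through preset viewpoints while playing narration clips.
list CAM_POINTS = [<128.0, 128.0, 30.0>, <200.0, 90.0, 25.0>];  // assumed positions
list CAM_FOCUS  = [<128.0, 140.0, 28.0>, <210.0, 95.0, 24.0>];  // assumed targets
integer STOP = 0;  // index of the current point of interest

default
{
    touch_start(integer n)
    {
        // "touch to wear": ask to attach temporarily (nothing lands in Inventory)
        llRequestPermissions(llDetectedKey(0),
            PERMISSION_ATTACH | PERMISSION_CONTROL_CAMERA);
    }

    run_time_permissions(integer perm)
    {
        if (perm & PERMISSION_ATTACH)
        {
            llAttachToAvatarTemp(ATTACH_HUD_CENTER_1);
            llSetTimerEvent(8.0);  // advance to the next viewpoint every 8 seconds
        }
    }

    timer()
    {
        if (STOP >= llGetListLength(CAM_POINTS))
        {
            llSetTimerEvent(0.0);  // tour over; camera control releases on detach
            return;
        }
        llSetCameraParams([
            CAMERA_ACTIVE, TRUE,
            CAMERA_POSITION, llList2Vector(CAM_POINTS, STOP),
            CAMERA_FOCUS, llList2Vector(CAM_FOCUS, STOP),
            CAMERA_POSITION_LOCKED, TRUE,
            CAMERA_FOCUS_LOCKED, TRUE]);
        llPlaySound("narration-" + (string)STOP, 1.0);  // assumed .wav clip names
        ++STOP;
    }
}
```

(Scripted camera control only works while the object is attached or sat upon, which is why the temporary attach comes first.)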
I am not selling this object.
This is not an advertisement.
This effect is new to me; so, it may be new to others, as well. I would be happy to share full-perm copies with the right users. (The hard part is creating .wav files, setting camera coordinates.)
You can find* it here: SLURL
There is also a Notecard at the demo location. Please share that—or this link—with others.
One of my earliest design gigs in virtual worlds was the development of a HUD* used by students learning the Chinese language. After four or five years, that design is still in use. The image below is from the Chinese Island simulation.
* Heads Up Display – an interactive display with buttons and text that mediates the user’s interaction with the virtual environment.
Note the blue dialog prompt, and the HUD in upper and left perimeters.
Early next year, a group of Monash University students will enter the virtual world of SecondLife™ to experience a variety of simulations: a restaurant, an airport, a medical clinic, and a train station. Later, they will actually travel to Italy for a program of study abroad.
The virtual environment in which they will immerse themselves is modeled on the neighborhood in Italy where they will be staying. The simulations are designed to prepare them for their visit. They will study maps, use currency, become familiar with local fixtures…like signs.
In support of the Italian Studies project, I am developing interactive objects—mainly the scripts—to provide a number of interactions. Students can open a “wallet” at the “ATM” and withdraw virtual currency, then visit a coffee shop and…maybe purchase a cappuccino. On touching some of the things they see (think “mouse click”), the name of that object appears as text in Italian and they hear an audio-stream pronunciation of the term.
They will be required to buy tickets, read a public transit schedule, and complete many other tasks during their lessons.
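The touch-to-learn behavior described above can be sketched in a few lines. The Italian label and sound name here are illustrative placeholders, not project assets:

```lsl
// Sketch: on touch, show the object's name in Italian and play its pronunciation.
string LABEL_IT = "il cappuccino";   // assumed term
string SOUND    = "cappuccino-it";   // assumed audio clip in object contents

default
{
    touch_start(integer n)
    {
        llSetText(LABEL_IT, <1.0, 1.0, 1.0>, 1.0);  // floating text in Italian
        llPlaySound(SOUND, 1.0);                    // pronounce the term aloud
        llSetTimerEvent(10.0);                      // clear the label later
    }

    timer()
    {
        llSetText("", <1.0, 1.0, 1.0>, 1.0);
        llSetTimerEvent(0.0);
    }
}
```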
My mother and I did something similar before our visit to New York City. After opening Google Earth and “roaming” the virtual streets around our hotel to prepare for our trip, we were able to navigate that neighborhood as though we had been there before.
So, thanks Mom…for helping field test this sort of technology.
The new LEA project has reached its first critical juncture. Documents are there, pared, and shared; notecard invitations to group collaboration passed about liberally; tools for communicating on site have been deployed. Land is claimed, and
…a few rough sketches now dot the landscape or hang in mid-air, waiting for what comes next: the one question which must be answered before much else happens…
What is our objective?
At the beginning of each year as a science teacher I evaluated my classroom curriculum, rearranged topics and re-prioritized lessons; I shuffled the deck. Often a science department, district committee, or state board would hand down a new set of curriculum guidelines. This usually meant simply identifying what items in the new list I was already addressing.
Nothing to see here, folks; move along.
But then, every few years, the federal government, scientific and—let’s face it—corporate communities decide to crumple up the old list, toss it in a basket, and start from scratch. With the release of new science education standards in April, the National Academies of Science have endorsed a new deal.
They’ve called for a new deck.
I have typically been pleased to see the changes in focus, the new language for science learning that comes with new national standards or guidelines. This round is no exception.
It is worth mention that these new standards are not a mandate, are not supported by all states. Many states will never recognize their merit, and others will take years to implement through adoption and articulation. With science education curriculum guidelines, there actually is no such thing as a national standard. That is just what some of us call them, out of convenience.
I also know that where the rubber hits the road is in each teacher, department, or curriculum committee’s interpretation of such standards. Every lesson is one person’s spin on what was prescribed. This applies to content providers, too. Folks who make textbooks, for example, are jumping on these standards like they are putting out a fire. I have seen it. And, different users interpret standards differently.
This also applies to the design of The Virtual Cell. Where we go with this new compass we have been given is up to us. What we do with the full region granted for this demonstration follows from our own interpretation of those same standards.
The discussion has begun regarding how to address standards, how to provide support for classroom instruction that is targeted and effective yet still wide-ranging in its application. After all, “if it doesn’t address my state’s guidelines, I cannot use it”.
Yet, one size will never fit all. While I was chatting at a recent conference exhibit of an activity for new users, one educator observed that there should be more notecards (with instructions). I had heard this same comment once already, just before the event. Later, the next day, another visitor observed “there are too many notecards.” I just heard that very same comment again, for the very same design, yesterday.
They are all correct, of course. There are too many notecards, and…we need more notecards. It should be black; and, it really should be white. You just have to “remember who your audience is.”
To emphasize a point and begin making the case for a particular design approach, I must mangle a maxim:
You can please all of the people with some of the content.
You can please some of the people with all of the content.
But, you can never please all of the people with all of the content.
With three months to build an interactive, standards-based, highly engaging and interesting activity—with three months to make upwards of three to five hundred lesser decisions (best guess, conservatively)—with three months to organize a collaborative team willing to offer their work free of charge in the interest of helping to further demonstrate that virtual worlds really do have a place in the classroom…this issue needs to be resolved quickly.
A number of performance indicators in the new standards are obviously ripe for a virtual world experience teaching about the cell. And, looking at the list, it is just as obvious that one could quickly bite off more than one can chew. With three months to build, the question becomes “What might we achieve?”
But, to digress for a moment, what we might achieve depends on who is pitching in…even if only offering 2¢. For this project to reach its potential, if the build even begins to approach what I try to imagine, any number of experienced—dare I say, expert—content creators will have played their hand.
- a wizard has conjured a vehicle,
- several members of one group of biologists have expressed an interest,
- a SecondLife™ entrepreneur has offered to make introductions to various said experts, and
- a fantastical feline has been purring about some pretty proper prims.
So, to table the “standards” conversation for a moment, I’ll ask an even more practical question. It looks like it’s my deal…
Among the many comments and questions, criticisms and suggestions received at the recent conference exhibit, the most striking were those occurring in complete opposition to others. An example is one user’s suggestion that more instructional notecards would be useful, where another user had observed just the day before that there were too many notecards.
Other conflicting comments (not just from the conference) have included that the activity should—and should not—address flight, media, and communicating via IM/chat.
There is also a fine line between providing sufficient directions and totally overwhelming the user with a tedious series of walk-and-stop-to-read, walk-and-read, walk-and-stop-to-read stations. Assembling a demo version of the activity for the conference provided an opportunity to experiment with this issue. Several modules were chosen for the event due to their level of completion. They happened to all give instruction primarily by notecard or floating text; so, a quick fix was needed to balance the forms of delivery.
A couple of new info-graphics were created just for the demo; this helped to spread instructional information across the various modes (notecards, public chat, dialog prompts and floating text, as well as infographics).
These two issues (different needs of end-users, varying the form of instructional text delivery) point to a challenge in trying to create a single tool that meets the needs of a large number of use cases. This was never part of the plan with the BSG prototype. Rather, a demonstration of the concept was pursued with a range of user interface features being addressed. The entire design is presented as a modular system that can be deployed in a variety of configurations and with any number of thematic “skins” applied.
Any out-of-the-box design would have to be compromised in too many ways for any one user to find it useful. As an open source project, we are already employing many least common denominators…across the build.
I would be interested in reading your comments on this.