Peter Stindberg handed me a HUD to let me know when Linden office hours are in session. They usually occur when I’m not available, but today I had a half day of work. I arrived at Qarl Linden’s office, where he told us in more ways than one that he couldn’t talk about meshes without an NDA, and that they would be made available in 2036. I emailed him asking for an agreement. Other than that, I asked him what he could talk about. His response was “nothing, apparently”. Most of the conversation centered around the weather and the location he moved to. One participant asked why Babbage Linden could give fine details about upcoming script changes while Qarl had his hands tied. Qarl’s response was that Babbage was in the core. Qarl also mentioned that Spore allows you to export creatures. That sounds pretty amazing. They are also considering animated meshes. There are concerns about morph targets and the extra data needed for them. There is talk about rigged skeletons for avatars and eventually for prims as well. There is the possibility of a physics constraint system in the works. It will not be like the original joints of the old days, but there is talk about hierarchical joints.
posted by Dedric Mauriac on Ambleside using a blogHUD : [blogHUD permalink]
When an image is uploaded, it is compressed to cut down on the overall file size. This helps images download more quickly for people who need to see them on a prim. The problem I have is that I get no control over this compression for large images. The more compression is applied, the more of its quality the image loses.
For smaller images, I am provided with a check box labeled “Lossy”. I am either losing a lot of quality, or none at all. This check box for smaller images was made available after people complained about the poor quality of uploaded images being used for sculpties. Sculpties are prims whose mesh is defined by an image. Accuracy becomes a very high priority with sculpt images. Since sculpt images are essentially images of data, the original data is mutated by this compression.
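To make the “images of data” point concrete: a sculpt map stores geometry in its pixels, with each pixel’s R, G, and B channels mapping to a vertex’s X, Y, and Z position inside the prim’s bounding box. A rough Python sketch of that decoding (a simplification of the idea, not the exact viewer math):

```python
def sculpt_pixel_to_vertex(r, g, b):
    """Decode one RGB pixel (0-255 per channel) of a sculpt map into
    a vertex position inside the prim's unit bounding box, with each
    axis running from -0.5 to 0.5. Simplified sketch, not the exact
    viewer implementation."""
    return (r / 255.0 - 0.5, g / 255.0 - 0.5, b / 255.0 - 0.5)

# A change of one or two units per channel -- typical lossy
# compression noise -- moves the vertex, which is why compressed
# sculpt maps come out lumpy.
```

With only 8 bits per axis, even a one-unit error in a channel shifts a vertex by 1/255 of the prim’s size, so compression noise that is invisible in a photograph visibly deforms a sculpty.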
Smaller images are fine now, but larger ones still have problems. Lossy compression, as it is called, is meant to let you reduce the file size of an image while keeping the result visually close to the original. Often a graphic artist will adjust the setting by eye until they start to notice changes. Since the process here is automated, I have no control over it.
Most large images are fine with this compression because they are photographs or contain gradients. These work best with lossy compression. However, when an image contains text or data, lossy compression becomes a nightmare. Imagine if you saved a text document of your thesis for school, uploaded it, and found that the school’s website compressed it so that words were changed into synonyms that didn’t make sense in context, or dropped a letter here and there just to save space.
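One reason lossy codecs mutate data images is quantization: values are snapped to a coarser scale so they take fewer bits to store. A toy Python sketch of the effect (real codecs quantize frequency coefficients rather than raw pixel values, but the damage to exact values is the same idea):

```python
def quantize(values, step=32):
    """Snap each 8-bit value to the nearest multiple of `step`,
    loosely mimicking the quantization step of a lossy codec."""
    return [min(255, round(v / step) * step) for v in values]

original = [0, 17, 100, 128, 200, 255]
lossy = quantize(original)   # [0, 32, 96, 128, 192, 255]
```

A photograph survives this because neighbouring pixels are similar anyway; an image whose pixels *are* the data does not, since 17 and 32 are simply different values.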
The reason behind my problems is just that. I tried in the past to automate uploading large images with RSS data. My approach was to put 9 pictures on one image and clip the texture to show only 1 picture at a time, cutting down on the file size needed to download 9 individual images and reducing the delay when swapping between them. I had everything done. A bot was set up to pull the most recent articles from an RSS feed, compose them into an image, and upload it. The problem came with the quality of the images themselves after being uploaded. The text was very hard to read, and appeared as if it hadn’t finished downloading yet.
This brings me to my problem from this weekend. I tried to make a progress bar consisting of 100 images within one large image. Rather than have the end-user download 100 individual images as a task progressed, I would show one portion of a larger image. It was a tedious process to create each individual frame to represent the progress. In the end, I converted it to a GIF animation to verify everything looked fine before uploading the grid of images. I saved my image as a bitmap file, so that there would be no question on my end that I might have performed some compression before uploading. Even the image preview looked fine. When I went to use the image in-world, I could not make out any of the text in the image. I did wait until the “Loading…” text disappeared in the texture under the texture tab.
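The arithmetic behind showing one cell of such a sheet is straightforward: shrink the texture repeats to 1/10 on each axis and offset the texture so the desired cell is centred on the face. A Python sketch of that math, assuming the centre-origin offset convention that LSL’s llScaleTexture/llOffsetTexture use (the signs are worth double-checking in-world):

```python
def frame_scale_offset(frame, grid=10):
    """For a grid x grid sprite sheet, return (repeats, u_offset,
    v_offset) that show only cell `frame` (0 = top-left, row-major).
    Assumes offset (0, 0) centres the texture on the face, as with
    LSL's llOffsetTexture -- a sketch to verify in-world."""
    col = frame % grid
    row = frame // grid
    repeats = 1.0 / grid                # one cell fills the face
    u = (col + 0.5) * repeats - 0.5     # horizontal shift to the cell
    v = 0.5 - (row + 0.5) * repeats     # vertical shift (rows go down)
    return repeats, u, v
```

For a 100-step progress bar, frame 0 gives offsets (-0.45, 0.45) and frame 99 gives (0.45, -0.45), so a script needs only one texture and a single offset change per update.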
I found that when I saved the image back to my hard drive, the image then appeared crisp in Second Life, and the copy on my local hard disk was similar to the original. I could still make out small differences due to compression. This is starting to lead me to believe that the Second Life servers are storing something close to the original images, but choosing to send a highly compressed version instead. This is a problem, because people are not going to look at my prims and then save a copy of the displayed image to their local hard drive just to see it clearly.
When I first started writing about this, I thought it would be great if we could choose the amount of compression that is applied to our uploaded images. However, it seems that compression may be done on the fly on the back end, each time an image is requested. I’m scratching my head over this one. I simply wanted to be able to display a nice graphical progress bar, but the constraints are getting in the way. Why does the Second Life viewer state that a texture is done loading when it is not? Is it still downloading? Why is the quality so horrible until you save it to your local hard drive?
I’m starting to look into making clothing as a possible new stream of income in Second Life. The benefit is that people do not need to own land in order to use the product. My other products (gadgets) often require land with available prim space in order to work.
I started searching for swimsuit/club models to work with for photorealistic textures. I wanted to find something tight-fitting that would not need additional prims to look good (unlike a dress, boots, frilly shirts, etc.). I found a One Piece Rio Swim Suit at Liquid Vinyl Clothing that may do the trick. I chose it because it had additional holes that gave me visual markers to help with mapping to a model.
The original templates provided by Linden Lab for creating clothing and skins were horrible. Today, they offer better templates provided by Chip Midnight. I hadn’t realized that they had updated their templates, and instead went with the Avatar UV templates by Robin Wood. I had used those templates a few times in the past and was comfortable with how much detail and help they offered, along with the many other Second Life tutorials by Robin Wood. I was familiar with Robin Wood’s artwork outside of Second Life; I often use one of my favorite tarot card decks, the Robin Wood Tarot.
Using Adobe Photoshop CS3 Extended, I was able to start morphing the swimsuit model to cover the right points of the UV maps. In the past, once a texture was mapped, designers often had to attempt an upload to see a preview and determine whether the clothing appeared correct. There were rumors that you could use Poser to load up an avatar mesh and preview the textures. I purchased the program, but was a bit confused by the setup to even attempt to load my own mesh and map textures. Other folks who design software have created tools to speed up this preview/creation process without the use of Second Life until the final texture is ready. The first one I found in the past was the freely available SL Clothes Previewer. I wasn’t able to find the software at the original site or on my network storage device, so I started looking at more options. (Update: found a link to the original files on TATS blog in the post SL Clothes Previewer.)
The next item to help out designers is AvPainter. The AvPainter software lets you not only preview textures, but also paint directly onto the model. There is a free demo version that prevents you from saving – but it’s enough to give the same functionality (if not more) as the SL Clothes Previewer. Drawing directly on the model was very helpful, not only for seeing where I went wrong, but for starting to make corrections.
The software also lets you use layers for each part of the clothing. I had a skin layer, a UV map layer, and then the actual swimsuit layer. I was able to draw on the swimsuit without affecting the other layers. A tablet is a must-have for this software. I personally find the pen to be much better when working in 3D, and pen pressure sensitivity is an added benefit.
AvPainter comes with a default UV mesh as the base. It’s great for seeing the mapped parts, but horrible for getting an idea of what body parts are where. I started hunting for a skin to go under the swimsuit. I found a post by Vint Falken about free full-perm female skin textures by Eloh Eliot. Eloh Eliot posted many different skins as PSD files with many layers showing how the skins are built up. I found that loading the PSD into AvPainter with all the layers started to strain memory. I flattened all of the skin layers so that the PSD eventually had only 3 layers: skin, UV map (15% opacity), and swimwear. It worked perfectly. I could see how the clothing would appear on a fully skinned model with a hint of UV mapping.
Although you can smudge the image in AvPainter, it leaves much to be desired when it comes to moving the mesh to prevent smudging. I had to keep going back to Photoshop to stretch/distort/warp/liquify the image a little each time and then return to AvPainter. I may even have to go back and work with Morpheus Photo Warper a bit to help with the morphing as well. However, I’ve had trouble with it in the past, since it is not really meant to morph images in this way. It is meant to morph one image into another, not to warp the mesh of a single existing image.
I showed my wife, and at first glance she was amazed at what I had done in a couple of hours. Then the critic in me started pointing out the problems to her. Shapes did not appear correct. Holes that appeared as ovals on the original model looked egg-shaped or too circular in my version. The back of the model did not show enough detail for me to map. The left side didn’t map well either, and I had to duplicate and flip the right side of her, giving an odd mirroring effect. Clasps sat flat against the skin, which would eventually require prims on models with large chests.
I suppose it is a good first start, but it leaves much to be desired. The optimal model would offer front, side, and back views straight on, with hands stretched out to the sides. That would make it easier to map the photos to the UV templates. However, I have never seen models posed like this in photographs. They are often at an angle, and only sometimes show the back. The lighting often changes for the back shot because the photographer usually stays in the same spot while the model simply turns around. Even better would be a model wearing a catsuit of an avatar mesh under the clothing. I can’t have everything.
I was browsing over at the SL Developers community site. I came across a file that Cristiano Midnight posted called SL Clothes Previewer. This software, created by Johan Durant, allows you to preview your clothing and skin textures on an avatar mesh.
You can do this with the Second Life client, but the preview window is very tiny. You also risk uploading by mistake for 10 L$ if you don’t pay attention to what you are doing. What I like is that you can work on textures during any downtime that SL incurs and still see how they will look.
I found it pretty simple to use. Click the button for each layer and choose an image file. It also supports transparency with TGA files.
Johan Durant asks that, if you like the tool, you visit their store, The Motion Merchant, or send a few L$ as a thank you. The store offers a lot of animations. One animation that caught my eye was the trick-or-treat bag. It is a freebie for Halloween, so you had better come quick and get your own. It lets you walk up to someone and say “Trick or Treat!”. Their character is animated to put candy in the bag that you hold out.