Jobs for animators

We proudly say that 70% of UniqueRobots visitors have gone on to jobs at international companies, and that UniqueRobots offers spectacular content about animation, animation schools and animation courses.

Robotics

Here "Unique" means different and "Robots" means robots. We have a clear view of robots here, and we proudly say that the main theme of the UniqueRobots site is robots: our visitors learn about all types of robots and about future technology.

Human psychology

UniqueRobots has wonderful content on human psychology and on how a person should lead their own life. It also offers premium e-books for free download and lots of tips and techniques on how to understand a person.

Future Vehicles

UniqueRobots proudly says that we won an award for the best articles on future cars and vehicles, and we honestly say that UniqueRobots has crossed one crore (10 million) visitors.

NASA projects

All upcoming NASA projects will be announced first on UniqueRobots. Just check out UniqueRobots for all future NASA projects and the latest technology news.

Welcome to the future tech planet

Video Phone Concept with Unique Keypad

Robo tech | 1:00 AM |


The challenge of the design brief was to create a design solution that enables greater and easier user interaction. The easy-to-understand keypad, combined with handwriting recognition technology, provides great information ergonomics for the user. Another of the video phone's key selling points is its dual-mode functionality: the phone's keypad can be locked into place to provide a sturdy stand, allowing users to sit back and enjoy the video calling experience.
sony ericsson video phone concept

Sony Ericsson Premiere 3 Walkman Phone by KDDI

Robo tech | 12:59 AM |


The cell phone market over the years has turned out to be the segment with the shortest shelf life, which means a constant process for manufacturers of coming up with new models. Today a cell phone is not just an instrument to make and receive calls but far more than that. The new Walkman phone Premiere 3 by KDDI for Sony Ericsson, slated for a spring release, is a powerhouse, so to speak. It features a 3″ VGA screen, an autofocus camera and 2GB of memory, weighs a meager 113 g, and is quite compact in size. The colors are quite vibrant, and with a video-on-demand feature and a remote control for the Walkman player, it is sure to be a model to look out for. There is no info on pricing yet.
sony ericsson premiere 3 walkman phone

Sony Ericsson Jalou Cell Phone by Dolce and Gabbana

Robo tech | 12:49 AM |


The Jalou phone is the outcome of the latest partnership between designers Dolce and Gabbana and Sony Ericsson, and it combines the latest technology with a fashion statement that will make others green with envy. The unique design of this phone was inspired by the multiple surfaces of a cushion-cut gemstone. Notable features include an efficient 3.2 megapixel camera that can tag photos with their geographic position, a 2-inch high resolution display, stereo Bluetooth and a built-in mirror, all packed into a compact clamshell as small as a lipstick box. Standard versions of this phone are already available in deep amethyst, onyx black and aquamarine shades, and the special D&G edition will be available in Sparkling Rose with 24-karat gold plating along with a special wireless headset.
jalou sony ericsson by dolce and gabbana

Sony Ericsson XPERIA X2 with 8.1 Megapixel Camera is Lighter Than X1

Robo tech | 12:42 AM |


The Sony Ericsson XPERIA X2 with its 3.2” touchscreen has been officially unveiled after its sleek-looking predecessor, the X1. In terms of dimensions the new X2 is close to its predecessor; however, it's lighter than the X1. The X2 features a redesigned keypad, much like that of a netbook, which is revealed by sliding out the upper portion. The 8.1 megapixel camera is one of the major upgrades of the X2. The custom panel interface has been enriched with fourteen preloaded panels and sixteen more available for download. Besides, the X2 marks the debut of the Windows Mobile 6.5 operating system as the default on a Sony Ericsson handset.
sony ericsson xperia2 cell phone

[Press Release]
Live life in the fast lane with the XPERIA™ X2 from Sony Ericsson. Designed for those who always need to be connected, whether it’s for business or personal life, the XPERIA™ X2 blurs the boundary between work and play.
London, UK – September 2, 2009 – Today Sony Ericsson announces the XPERIA™ X2, a new Windows® phone that offers a best-in-class email and multimedia experience. In the modern world where 24/7 communication is key, users can instantly synchronise their mail and calendar and open and edit Microsoft® Office Mobile documents quickly and efficiently to stay connected with colleagues wherever they are.
The XPERIA™ X2 also includes the unique SlideView feature, which provides quick access to frequently used phone activities. Providing quick interaction with contacts, messages, media and more, SlideView gives an overview of missed incoming activity, notifying the user of any missed calls, e-mails and text messages so users don’t overlook an important contact.
sony ericsson xperia2 cell phone
With 14 specially designed preloaded XPERIA™ panels and 16 more to download, users can work with no boundaries with the XPERIA™ X2. From Skype, Mytopia and Google™, to games, CNN and Windows Live™, the panels ensure users are up-to-date with what matters most to them. With an improved touch interface and a new 3D signature panel, users also benefit from flexible desktop panels designed to categorise business, fun and communication features. Just set favourites to appear at certain times of the day to get the latest news in the morning, YouTube™ at lunchtime and games for the journey home.
With QWERTY messaging and Windows Mobile®, users can even show their presentations on the big screen with the TV out cable – the XPERIA™ X2 makes a day at the office a walk in the park.
“In the fast moving world we live in, the need to stay connected has never been so important.” said Sumit Malhotra, marketing business manager, Sony Ericsson. “We constantly rely on our mobile phones as an extension to the office and the XPERIA™ X2 debuting with Windows Mobile® 6.5, allows users to work quickly and efficiently while on the move. The XPERIA™ X2 also features a new range of interactive panels as well as SlideView, which provides quick access to frequently used phone activities – perfect for those who need to see any missed incoming activity at a glance.”
sony ericsson xperia2 cell phone
Entertainment is not compromised on the XPERIA™ X2. Enjoy amazing multimedia with the 3.2” high resolution touch screen and DVD quality, and take advantage of the 8.1 megapixel camera to capture and instantly share experiences with friends and family. By personalising the panels, users can access Facebook™ to upload their party or holiday images, and they can chat with friends across the world via Skype. Whether it is music, photography, email, video or gaming, the XPERIA™ X2 has it all.
“Windows® phones allow people to manage their whole world – from work to home to play – on a single handset,” said Stephanie Ferguson, general manager, product management, Microsoft Corp. “The XPERIA™ X2 taps the powerful messaging and multimedia capabilities in Windows Mobile® so customers can be in touch, productive and entertained wherever they are.”
With the need to stay connected 24/7, Sony Ericsson has designed XPERIA™ Services, a bespoke after-sales package designed to help XPERIA™ X2 users get the most out of their mobile phone. With a specialised technical team standing by to support busy users and talk through the outstanding features the XPERIA™ X2 has to offer, users can get help with everything from troubleshooting to accessing their favourite websites. And if their XPERIA™ X2 stops working while they are abroad, XPERIA™ Services can replace the mobile phone via a simple phone call.
XPERIA™ Services really has been devised with the consumer in mind and to help consumers discover more about their XPERIA™ X2.
sony ericsson xperia2 cell phone
Live life without boundaries
* Windows Mobile® 6.5 – work on the move easier
* Make the most of the day – flexible desktop panels categorised for life: communication and fun, multimedia, business and internet
* Work without boundaries – instant synchronisation of mail, calendar powered by QWERTY keyboard messaging and Windows Mobile®
* Slide view – quick access to frequently used phone activities and overview of missed incoming activity
* Present documents on the big screen – TV out cable
* Never get stranded – XPERIA™ exclusive travel insurance
* Enjoy amazing multimedia – 3.2” high resolution touch screen and DVD quality
* 8.1 mega pixel camera with Photo light – easily upload images to web albums
* Real 3D panel – 3D effects and zoom, music playback controls
Big business meeting? Arrive charged with the Car Charger AN300 – ultra fast, ultra-safe and ultra-reliable. A perfect accessory for the XPERIA™ X2, it charges 40 per cent faster than most car chargers – just plug into the cigarette lighter and go.
XPERIA™ X2 supports GSM/GPRS/EDGE 850/900/1800/1900 and UMTS/HSPA 850/900/2100. XPERIA™ X2 will be available in selected markets from early Q4 in the colours Elegant Black and Modern Silver.
sony ericsson xperia2 cell phone

Sony Ericsson Xperia Play in White

Robo tech | 12:37 AM |


Improving hardware and the emergence of mobile application stores such as Google's Android Market have led to the increasing popularity of gaming on mobile phones. However, while graphics look nice on those giant touchscreens, they aren't the best when it comes to controlling games. Fortunately, Sony Ericsson has heeded the woes of gamers and will unveil its new white Xperia Play. For one reason or another, pristine white appears to be the trendy color for smartphones, and it was an inspiration for this version of the Xperia Play. This shiny smartphone hasn't made its public appearance yet, but it will soon be unveiled in white. The device features a typical smartphone design along with a slide-out game pad that lets gamers handle the device with ease. There is no clue how long this exclusivity will last; however, most are anticipating its appearance in the public arena.
Designer : Sony Ericsson
Sony Ericsson Xperia Play White

EAZ Disabled Mobility Device Is An Innovative Mobility Solution For Physically Disabled People

Robo tech | 12:33 AM |


The EAZ Disabled Mobility Device concept eliminates the discomfort and stigma that physically disabled people usually experience when riding a wheelchair. The innovative mobility device is actually a combination of a wheelchair and a walker that revolutionizes the mobility solution for moderately disabled people. Not only disabled people but also the elderly can stylishly rely on this concept as their personal transportation solution. This two-wheeled mobility device features a self-balancing mechanism that lets users travel in both standing and seated configurations, giving them the freedom to move around.
Designer : Grayson Stopp
EAZ disabled mobility device

i-Gucci Grammy Edition by Frida Giannini

Robo tech | 12:29 AM |


Recognizing the joint venture between Gucci and the Recording Academy, creative designer Frida Giannini has designed an exceptional collection of new i-Gucci GRAMMY watches. The Recording Academy is now launching a GRAMMY special edition collection of watches and jewelry, an exclusive fusion of style and music. Fans of cosmopolitan watches will find Gucci's brilliantly designed timepiece, the i-Gucci, quite eye-catching.
Offering the ultimate in versatility, the broad face of the watch changes from a two-time-zone dial to a more streamlined version with two discreet digital hands showing local time. The watch's double-layout digital display carries a special label celebrating the GRAMMY partnership. Emphasizing the partnership, the watch's stainless-steel case features a GRAMMY Awards special edition label. To add a touch of glamor to this special edition model, the designer has outlined the watch face with sparkling diamonds. The gramophone, the unique symbol of the GRAMMYs since their inception, has been etched on the dog tag over a yellow 18-kt gold globe. Music lovers who wish to differentiate their style credentials will find this model very appealing. A finely engraved Gucci logo finishes the overall design.
Designer : Frida Giannini
i-Gucci Watch Grammy Edition

HTC Slim by Sylvain Gerber

Robo tech | 12:26 AM |


Sylvain Gerber, an industrial designer, has designed a slim smartphone concept for HTC. The sleek design gives this phone an elegant touch and a futuristic look. It has a magnesium case and a carbon fiber back plate for quality and luxury. The three main buttons are big enough for users with big thumbs to operate the phone. Even though it looks very stylish, this phone has been designed for business users, with fewer multimedia features.
Designer : Sylvain Gerber
HTC Slim by Sylvain Gerber

MITRA Cylindrical Shaped Portable PC Contains A Built-In Projector For Convenient Presentation

Robo tech | 3:28 AM |


Whenever I attend a meeting, I have to carry a heavy laptop and an even heavier projector from my office to the venue, so I know very well how that feels. I wish I could have the MITRA micro PC, which features a compact, flashlight-shaped body with great functionality. This small PC contains a roll-out LCD that doubles as a set of solar panels, generating energy from sunlight and storing the surplus in the onboard battery for emergency use. It also has a hand-crank charging facility that provides 30 minutes of use from just 30 turns of the crank. The built-in LED projector is another unique feature of the concept, offering a convenient presentation option. The PC was designed especially for rural India, where online teaching, online medical treatment and online help for farmers can have a significant effect on the whole country's development, and MITRA is an efficient alternative in that context.
Designer : Yogesh Kumar Baghel
mitra

3D ANIMATION

Robo tech | 11:50 PM |

3D computer graphics (in contrast to 2D computer graphics) are graphics that use a three-dimensional representation of geometric data (often Cartesian) that is stored in the computer for the purposes of performing calculations and rendering 2D images. Such images may be stored for viewing later or displayed in real-time.
Despite these differences, 3D computer graphics rely on many of the same algorithms as 2D computer vector graphics in the wire-frame model and 2D computer raster graphics in the final rendered display. In computer graphics software, the distinction between 2D and 3D is occasionally blurred; 2D applications may use 3D techniques to achieve effects such as lighting, and 3D may use 2D rendering techniques.
3D computer graphics are often referred to as 3D models. Apart from the rendered graphic, the model is contained within the graphical data file. However, there are differences. A 3D model is the mathematical representation of any three-dimensional object. A model is not technically a graphic until it is displayed. Due to 3D printing, 3D models are not confined to virtual space. A model can be displayed visually as a two-dimensional image through a process called 3D rendering, or used in non-graphical computer simulations and calculations.



History

William Fetter was credited with coining the term computer graphics in 1960[1][2] to describe his work at Boeing. One of the first displays of computer animation was Futureworld (1976), which included an animation of a human face and hand — produced by Ed Catmull and Fred Parke at the University of Utah.
Overview

The process of creating 3D computer graphics can be sequentially divided into three basic phases: 3D modeling which describes the process of forming the shape of an object, layout and animation which describes the motion and placement of objects within a scene, and 3D rendering which produces an image of an object.
Modeling


A 3D rendering with ray tracing and ambient occlusion using Blender and YafaRay
Main article: 3D modeling
The model describes the process of forming the shape of an object. The two most common sources of 3D models are those originated on the computer by an artist or engineer using some kind of 3D modeling tool, and those scanned into a computer from real-world objects. Models can also be produced procedurally or via physical simulation.
Layout and animation
Main article: Computer animation
Before objects are rendered, they must be placed (laid out) within a scene. This is what defines the spatial relationships between objects in a scene including location and size. Animation refers to the temporal description of an object, i.e., how it moves and deforms over time. Popular methods include keyframing, inverse kinematics, and motion capture, though many of these techniques are used in conjunction with each other. As with modeling, physical simulation is another way of specifying motion.
Rendering


During the 3D rendering step, the number of reflections “light rays” can take, as well as various other attributes, can be tailored to achieve a desired visual effect. Image created with Cobalt


A 3d model of a Dunkerque class battleship rendered with flat shading.
Main article: 3D rendering
Rendering converts a model into an image either by simulating light transport to get photorealistic images, or by applying some kind of style as in non-photorealistic rendering. The two basic operations in realistic rendering are transport (how much light gets from one place to another) and scattering (how surfaces interact with light). This step is usually performed using 3D computer graphics software or a 3D graphics API. The process of altering the scene into a suitable form for rendering also involves 3D projection which allows a three-dimensional image to be viewed in two dimensions.
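To make the transport and scattering idea concrete, here is a minimal sketch in Python of the simplest common scattering model, Lambertian diffuse shading: the brightness a surface point reflects depends on the light reaching it (transport) and on the angle between the surface normal and the light direction (scattering). The model choice and all names are illustrative assumptions, not something the article prescribes.

import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def lambert_shade(normal, light_dir, light_intensity, albedo):
    # normal: unit surface normal at the point
    # light_dir: unit vector from the point toward the light
    # light_intensity: strength of the incoming light (the transport result)
    # albedo: fraction of light the surface reflects (0..1)
    n, l = normalize(normal), normalize(light_dir)
    cos_theta = max(0.0, sum(a * b for a, b in zip(n, l)))  # surfaces facing away get 0
    return albedo * light_intensity * cos_theta

# Example: a surface facing straight up, lit from 45 degrees above.
print(lambert_shade((0, 1, 0), (0, 1, 1), 1.0, 0.8))  # about 0.566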
Communities

There are a multitude of websites designed to help educate and support 3D graphic artists. Some are managed by software developers and content providers, but there are standalone sites as well. These communities allow for members to seek advice, post tutorials, provide product reviews or post examples of their own work.
Distinction from photorealistic 2D graphics

Not all computer graphics that appear 3D are based on a wireframe model. 2D computer graphics with 3D photorealistic effects are often achieved without wireframe modeling and are sometimes indistinguishable in the final form. Some graphic art software includes filters that can be applied to 2D vector graphics or 2D raster graphics on transparent layers. Visual artists may also copy or visualize 3D effects and manually render photorealistic effects without the use of filters. See also still life.[citation needed]

2D ANIMATION

Robo tech | 11:37 PM |

2D computer graphics is the computer-based generation of digital images—mostly from two-dimensional models (such as 2D geometric models, text, and digital images) and by techniques specific to them. The word may stand for the branch of computer science that comprises such techniques, or for the models themselves.

Raster graphic sprites (left) and masks (right)
2D computer graphics are mainly used in applications that were originally developed upon traditional printing and drawing technologies, such as typography, cartography, technical drawing, advertising, etc. In those applications, the two-dimensional image is not just a representation of a real-world object, but an independent artifact with added semantic value; two-dimensional models are therefore preferred, because they give more direct control of the image than 3D computer graphics (whose approach is more akin to photography than to typography).
In many domains, such as desktop publishing, engineering, and business, a description of a document based on 2D computer graphics techniques can be much smaller than the corresponding digital image—often by a factor of 1/1000 or more. This representation is also more flexible since it can be rendered at different resolutions to suit different output devices. For these reasons, documents and illustrations are often stored or transmitted as 2D graphic files.
2D computer graphics started in the 1950s, based on vector graphics devices. These were largely supplanted by raster-based devices in the following decades. The PostScript language and the X Window System protocol were landmark developments in the field.
2D graphics techniques

2D graphics models may combine geometric models (also called vector graphics), digital images (also called raster graphics), text to be typeset (defined by content, font style and size, color, position, and orientation), mathematical functions and equations, and more. These components can be modified and manipulated by two-dimensional geometric transformations such as translation, rotation, scaling. In object-oriented graphics, the image is described indirectly by an object endowed with a self-rendering method—a procedure which assigns colors to the image pixels by an arbitrary algorithm. Complex models can be built by combining simpler objects, in the paradigms of object-oriented programming.
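As a small illustration of those geometric transformations, the Python sketch below applies rotation, scaling and translation to a set of 2D points; the triangle data and function names are made up for the example.

import math

def translate(points, dx, dy):
    return [(x + dx, y + dy) for x, y in points]

def scale(points, sx, sy):
    return [(x * sx, y * sy) for x, y in points]

def rotate(points, angle_rad):
    c, s = math.cos(angle_rad), math.sin(angle_rad)
    # standard 2x2 rotation matrix applied to each point
    return [(x * c - y * s, x * s + y * c) for x, y in points]

triangle = [(0, 0), (1, 0), (0, 1)]
moved = translate(scale(rotate(triangle, math.pi / 2), 2, 2), 5, 5)
print(moved)  # the triangle rotated 90 degrees, doubled in size, shifted by (5, 5)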
Direct painting
A convenient way to create a complex image is to start with a blank "canvas" raster map (an array of pixels, also known as a bitmap) filled with some uniform background color and then "draw", "paint" or "paste" simple patches of color onto it, in an appropriate order. In particular, the canvas may be the frame buffer for a computer display.
Some programs will set the pixel colors directly, but most will rely on some 2D graphics library and/or the machine's graphics card, which usually implement the following operations (a short sketch of these operations follows the list):
paste a given image at a specified offset onto the canvas;
write a string of characters with a specified font, at a given position and angle;
paint a simple geometric shape, such as a triangle defined by three corners, or a circle with given center and radius;
draw a line segment, arc, or simple curve with a virtual pen of given width.
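The sketch below shows the four operations above using the Pillow library in Python; the choice of Pillow, the canvas size and the colors are assumptions made for illustration, not something the text requires (it omits the optional text angle).

from PIL import Image, ImageDraw

canvas = Image.new("RGB", (320, 240), "white")   # blank canvas filled with a background color
draw = ImageDraw.Draw(canvas)

# paste a given image at a specified offset
patch = Image.new("RGB", (40, 40), "orange")
canvas.paste(patch, (10, 10))

# write a string of characters at a given position (default font)
draw.text((60, 20), "hello", fill="black")

# paint simple geometric shapes: a triangle and a circle
draw.polygon([(150, 30), (200, 30), (175, 80)], fill="blue")
draw.ellipse([220, 20, 280, 80], outline="red", width=2)  # circle from its bounding box

# draw a line segment with a virtual pen of given width
draw.line([(20, 200), (300, 120)], fill="green", width=4)

canvas.save("canvas.png")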
Extended color models
Text, shapes and lines are rendered with a client-specified color. Many libraries and cards provide color gradients, which are handy for the generation of smoothly-varying backgrounds, shadow effects, etc. (See also Gouraud shading). The pixel colors can also be taken from a texture, e.g. a digital image (thus emulating rub-on screentones and the fabled "checker paint" which used to be available only in cartoons).
Painting a pixel with a given color usually replaces its previous color. However, many systems support painting with transparent and translucent colors, which only modify the previous pixel values. The two colors may also be combined in fancier ways, e.g. by computing their bitwise exclusive or. This technique is known as inverting color or color inversion, and is often used in graphical user interfaces for highlighting, rubber-band drawing, and other volatile painting—since re-painting the same shapes with the same color will restore the original pixel values.
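A tiny Python sketch of that exclusive-or trick (the 24-bit RGB packing is an assumption for the example): painting the same shape twice with the same XOR brush restores the original pixels, which is why it suits highlighting and rubber-band drawing.

def xor_paint(pixel, brush):
    # combine a stored pixel with a brush color by bitwise exclusive or;
    # both values are 24-bit RGB integers such as 0xFF8800
    return pixel ^ brush

original = 0x336699
highlight = 0xFFFFFF          # XOR with white inverts every bit, i.e. color inversion

once = xor_paint(original, highlight)
twice = xor_paint(once, highlight)

print(hex(once))    # 0xcc9966, the inverted color
print(hex(twice))   # 0x336699, painting again restores the original value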
Layers
Main article: Layers (digital image editing)
The models used in 2D computer graphics usually do not provide for three-dimensional shapes, or three-dimensional optical phenomena such as lighting, shadows, reflection, refraction, etc. However, they usually can model multiple layers (conceptually of ink, paper, or film; opaque, translucent, or transparent) stacked in a specific order. The ordering is usually defined by a single number (the layer's depth, or distance from the viewer).
Layered models are sometimes called 2½-D computer graphics. They make it possible to mimic traditional drafting and printing techniques based on film and paper, such as cutting and pasting; and allow the user to edit any layer without affecting the others. For these reasons, they are used in most graphics editors. Layered models also allow better anti-aliasing of complex drawings and provide a sound model for certain techniques such as mitered joints and the even-odd rule.
Layered models are also used to allow the user to suppress unwanted information when viewing or printing a document, e.g. roads and/or railways from a map, certain process layers from an integrated circuit diagram, or hand annotations from a business letter.
In a layer-based model, the target image is produced by "painting" or "pasting" each layer, in order of decreasing depth, on the virtual canvas. Conceptually, each layer is first rendered on its own, yielding a digital image with the desired resolution which is then painted over the canvas, pixel by pixel. Fully transparent parts of a layer need not be rendered, of course. The rendering and painting may be done in parallel, i.e. each layer pixel may be painted on the canvas as soon as it is produced by the rendering procedure.
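To make the painting order concrete, here is a minimal Python sketch that composites layers onto a canvas in order of decreasing depth, skipping fully transparent pixels; the tiny single-channel "images" and field names are invented for the example.

# Each layer: a depth (larger = farther from the viewer) and a grid of pixels,
# where None marks a fully transparent pixel that need not be painted.
layers = [
    {"depth": 0, "pixels": [[None, 7], [None, None]]},   # nearest layer
    {"depth": 5, "pixels": [[1, 1], [None, 2]]},
    {"depth": 9, "pixels": [[0, 0], [0, 0]]},            # farthest layer (background)
]

height, width = 2, 2
canvas = [[0 for _ in range(width)] for _ in range(height)]

# Paint in order of decreasing depth: farthest first, nearest last.
for layer in sorted(layers, key=lambda l: l["depth"], reverse=True):
    for y in range(height):
        for x in range(width):
            value = layer["pixels"][y][x]
            if value is not None:            # skip fully transparent parts
                canvas[y][x] = value

print(canvas)   # [[1, 7], [0, 2]]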
Layers that consist of complex geometric objects (such as text or polylines) may be broken down into simpler elements (characters or line segments, respectively), which are then painted as separate layers, in some order. However, this solution may create undesirable aliasing artifacts wherever two elements overlap the same pixel.
See also Portable Document Format#Layers.
2D graphics hardware

Modern computer graphics card displays almost overwhelmingly use raster techniques, dividing the screen into a rectangular grid of pixels, due to the relatively low cost of raster-based video hardware as compared with vector graphic hardware. Most graphic hardware has internal support for blitting operations and sprite drawing. A co-processor dedicated to blitting is known as a Blitter chip.
Classic 2D graphics chips of the late 1970s and early 1980s, used in the 8-bit video game consoles and home computers, include:
Atari's ANTIC (actually a 2D GPU), TIA, CTIA, and GTIA
Commodore/MOS Technology's VIC and VIC-II
2D graphics software

Many graphical user interfaces (GUIs), including Mac OS, Microsoft Windows, or the X Window System, are primarily based on 2D graphical concepts. Such software provides a visual environment for interacting with the computer, and commonly includes some form of window manager to aid the user in conceptually distinguishing between different applications. The user interface within individual software applications is typically 2D in nature as well, due in part to the fact that most common input devices, such as the mouse, are constrained to two dimensions of movement.
2D graphics are very important in the control peripherals such as printers, plotters, sheet cutting machines, etc. They were also used in most early video and computer games; and are still used for card and board games such as solitaire, chess, mahjongg, etc.
2D graphics editors or drawing programs are application-level software for the creation of images, diagrams and illustrations by direct manipulation (through the mouse, graphics tablet, or similar device) of 2D computer graphics primitives. These editors generally provide geometric primitives as well as digital images; and some even support procedural models. The illustration is usually represented internally as a layered model, often with a hierarchical structure to make editing more convenient. These editors generally output graphics files where the layers and primitives are separately preserved in their original form. MacDraw, introduced in 1984 with the Macintosh line of computers, was an early example of this class; recent examples are the commercial products Adobe Illustrator and CorelDRAW, and the free editors such as xfig or Inkscape. There are also many 2D graphics editors specialized for certain types of drawings such as electrical, electronic and VLSI diagrams, topographic maps, computer fonts, etc.
Image editors are specialized for the manipulation of digital images, mainly by means of free-hand drawing/painting and signal processing operations. They typically use a direct-painting paradigm, where the user controls virtual pens, brushes, and other free-hand artistic instruments to apply paint to a virtual canvas. Some image editors support a multiple-layer model; however, in order to support signal-processing operations like blurring each layer is normally represented as a digital image. Therefore, any geometric primitives that are provided by the editor are immediately converted to pixels and painted onto the canvas. The name raster graphics editor is sometimes used to contrast this approach to that of general editors which also handle vector graphics. One of the first popular image editors was Apple's MacPaint, companion to MacDraw. Modern examples are the free GIMP editor, and the commercial products Photoshop and Paint Shop Pro. This class too includes many specialized editors — for medicine, remote sensing, digital photography, etc



COMPUTER ANIMATION

Robo tech | 11:25 PM |

Computer animation is the process used for generating animated images by using computer graphics. The more general term computer generated imagery encompasses both static scenes and dynamic images, while computer animation only refers to moving images produced by exploiting the persistence of vision to make a series of images look animated. Given that images last for about one twenty-fifth of a second on the retina, fast image replacement creates the illusion of movement.
Modern computer animation usually uses 3D computer graphics, although 2D computer graphics are still used for stylistic, low bandwidth, and faster real-time renderings. Sometimes the target of the animation is the computer itself, but sometimes the target is another medium, such as film.
Computer animation is essentially a digital successor to the art of stop motion animation of 3D models and frame-by-frame animation of 2D illustrations. Computer generated animations are more controllable than other more physically based processes, such as constructing miniatures for effects shots or hiring extras for crowd scenes, and because it allows the creation of images that would not be feasible using any other technology. It can also allow a single graphic artist to produce such content without the use of actors, expensive set pieces, or props.
To create the illusion of movement, an image is displayed on the computer screen and repeatedly replaced by a new image that is similar to the previous image, but advanced slightly in the time domain (usually at a rate of 24 or 30 frames/second). This technique is identical to how the illusion of movement is achieved with television and motion pictures.
For 3D animations, objects (models) are built on the computer monitor (modeled) and 3D figures are rigged with a virtual skeleton. For 2D figure animations, separate objects (illustrations) and separate transparent layers are used, with or without a virtual skeleton. Then the limbs, eyes, mouth, clothes, etc. of the figure are moved by the animator on key frames. The differences in appearance between key frames are automatically calculated by the computer in a process known as tweening or morphing. Finally, the animation is rendered.
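A minimal Python sketch of the tweening step, interpolating a single animation value linearly between two key frames; the frame numbers and the arm-angle example are made up for illustration.

def tween(key_a, key_b, frame):
    # key_a, key_b: (frame_number, value) pairs set by the animator on key frames
    # frame: the in-between frame to compute
    (fa, va), (fb, vb) = key_a, key_b
    t = (frame - fa) / (fb - fa)       # 0.0 at key_a, 1.0 at key_b
    return va + t * (vb - va)

# An arm angle keyed at frame 0 (0 degrees) and frame 24 (90 degrees):
for frame in range(0, 25, 6):
    print(frame, tween((0, 0.0), (24, 90.0), frame))
# frames 0, 6, 12, 18, 24 give 0, 22.5, 45, 67.5, 90 degrees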
For 3D animations, all frames must be rendered after modeling is complete. For 2D vector animations, the rendering process is the key frame illustration process, while tweened frames are rendered as needed. For pre-recorded presentations, the rendered frames are transferred to a different format or medium such as film or digital video. The frames may also be rendered in real time as they are presented to the end-user audience. Low bandwidth animations transmitted via the internet (e.g. 2D Flash, X3D) often use software on the end-users computer to render in real time as an alternative to streaming or pre-loaded high bandwidth animations.


A simple example


Computer animation example
The screen is blanked to a background color, such as black. Then, a goat is drawn on the right of the screen. Next, the screen is blanked, but the goat is re-drawn or duplicated slightly to the left of its original position. This process is repeated, each time moving the goat a bit to the left. If this process is repeated fast enough, the goat will appear to move smoothly to the left. This basic procedure is used for all moving pictures in films and television.
The moving goat is an example of shifting the location of an object. More complex transformations of object properties such as size, shape, lighting effects often require calculations and computer rendering instead of simple re-drawing or duplication.
Explanation

To trick the eye and brain into thinking they are seeing a smoothly moving object, the pictures should be drawn at around 12 frames per second (frames/s) or faster (a frame is one complete image). At rates above 70 frames/s, no improvement in realism or smoothness is perceivable due to the way the eye and brain process images. At rates below 12 frames/s, most people can detect jerkiness associated with the drawing of new images, which detracts from the illusion of realistic movement. Conventional hand-drawn cartoon animation often uses 15 frames/s in order to save on the number of drawings needed, but this is usually accepted because of the stylized nature of cartoons. Because it produces more realistic imagery, computer animation demands higher frame rates to reinforce this realism.
The reason no jerkiness is seen at higher speeds is due to “persistence of vision.” From moment to moment, the eye and brain working together actually store whatever one looks at for a fraction of a second, and automatically "smooth out" minor jumps. Movie film seen in theaters in the United States runs at 24 frames per second, which is sufficient to create this illusion of continuous movement.
History

Main article: History of computer animation
See also: Timeline of computer animation in film and television
One of the earliest steps in the history of computer animation was the 1973 movie Westworld, a science-fiction film about a society in which robots live and work among humans, though the first use of 3D wireframe imagery was in its sequel, Futureworld (1976), which featured a computer-generated hand and face created by then University of Utah graduate students Edwin Catmull and Fred Parke.
Developments in CGI technologies are reported each year at SIGGRAPH, an annual conference on computer graphics and interactive techniques, attended each year by tens of thousands of computer professionals. Developers of computer games and 3D video cards strive to achieve the same visual quality on personal computers in real-time as is possible for CGI films and animation. With the rapid advancement of real-time rendering quality, artists began to use game engines to render non-interactive movies. This art form is called machinima.
Methods of animating virtual characters



In this .gif of a 2D Flash animation, each 'stick' of the figure is keyframed over time to create motion.
In most 3D computer animation systems, an animator creates a simplified representation of a character's anatomy, analogous to a skeleton or stick figure. The position of each segment of the skeletal model is defined by animation variables, or Avars. In human and animal characters, many parts of the skeletal model correspond to actual bones, but skeletal animation is also used to animate other things, such as facial features (though other methods for facial animation exist). The character "Woody" in Toy Story, for example, uses 700 Avars, including 100 Avars in the face. The computer does not usually render the skeletal model directly (it is invisible), but uses the skeletal model to compute the exact position and orientation of the character, which is eventually rendered into an image. Thus by changing the values of Avars over time, the animator creates motion by making the character move from frame to frame.
There are several methods for generating the Avar values to obtain realistic motion. Traditionally, animators manipulate the Avars directly. Rather than set Avars for every frame, they usually set Avars at strategic points (frames) in time and let the computer interpolate or 'tween' between them, a process called keyframing. Keyframing puts control in the hands of the animator, and has roots in hand-drawn traditional animation.
In contrast, a newer method called motion capture makes use of live action. When computer animation is driven by motion capture, a real performer acts out the scene as if they were the character to be animated. His or her motion is recorded to a computer using video cameras and markers, and that performance is then applied to the animated character.
Each method has its advantages, and as of 2007, games and films are using either or both of these methods in productions. Keyframe animation can produce motions that would be difficult or impossible to act out, while motion capture can reproduce the subtleties of a particular actor. For example, in the 2006 film Pirates of the Caribbean: Dead Man's Chest, actor Bill Nighy provided the performance for the character Davy Jones. Even though Nighy himself doesn't appear in the film, the movie benefited from his performance by recording the nuances of his body language, posture, facial expressions, etc. Thus motion capture is appropriate in situations where believable, realistic behavior and action is required, but the types of characters required exceed what can be done through conventional costuming.
Creating characters and objects on a computer

3D computer animation combines 3D models of objects and programmed or hand "keyframed" movement. Models are constructed out of geometrical vertices, faces, and edges in a 3D coordinate system. Objects are sculpted much like real clay or plaster, working from general forms to specific details with various sculpting tools. A bone/joint animation system is set up to deform the CGI model (e.g., to make a humanoid model walk). In a process called rigging, the virtual marionette is given various controllers and handles for controlling movement. Animation data can be created using motion capture, or keyframing by a human animator, or a combination of the two.
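As a small illustration of "geometrical vertices, faces, and edges in a 3D coordinate system", a polygon mesh can be stored as a list of vertex positions plus faces that index into it, with edges derived from the faces. The pyramid data below is invented for the example and is not from the article.

# A square-based pyramid stored as shared vertices plus index-based faces.
vertices = [
    (0.0, 0.0, 0.0),   # 0: base corner
    (1.0, 0.0, 0.0),   # 1: base corner
    (1.0, 0.0, 1.0),   # 2: base corner
    (0.0, 0.0, 1.0),   # 3: base corner
    (0.5, 1.0, 0.5),   # 4: apex
]
faces = [
    (0, 1, 2, 3),                                  # rectangular base
    (0, 1, 4), (1, 2, 4), (2, 3, 4), (3, 0, 4),    # four triangular sides
]

# Edges need not be stored separately; they follow from the faces.
edges = set()
for face in faces:
    for i in range(len(face)):
        a, b = face[i], face[(i + 1) % len(face)]
        edges.add((min(a, b), max(a, b)))

print(len(vertices), len(faces), len(edges))   # 5 vertices, 5 faces, 8 edges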
3D models rigged for animation may contain thousands of control points - for example, the character "Woody" in Pixar's movie Toy Story, uses 700 specialized animation controllers. Rhythm and Hues Studios labored for two years to create Aslan in the movie The Chronicles of Narnia: The Lion, the Witch and the Wardrobe which had about 1851 controllers, 742 in just the face alone. In the 2004 film The Day After Tomorrow, designers had to design forces of extreme weather with the help of video references and accurate meteorological facts. For the 2005 remake of King Kong, actor Andy Serkis was used to help designers pinpoint the gorilla's prime location in the shots and used his expressions to model "human" characteristics onto the creature. Serkis had earlier provided the voice and performance for Gollum in J. R. R. Tolkien's The Lord of the Rings trilogy.
Computer animation development equipment



Computer animation can be created with a computer and animation software. Some impressive animation can be achieved even with basic programs; however, the rendering can take a lot of time on an ordinary home computer. Because of this, video game animators tend to use low resolution, low polygon count renders, such that the graphics can be rendered in real time on a home computer. Photorealistic animation would be impractical in this context.
Professional animators of movies, television, and video sequences on computer games make photorealistic animation with high detail. This level of quality for movie animation would take tens to hundreds of years to create on a home computer. Many powerful workstation computers are used instead. Graphics workstation computers use two to four processors, and thus are a lot more powerful than a home computer, and are specialized for rendering. A large number of workstations (known as a render farm) are networked together to effectively act as a giant computer. The result is a computer-animated movie that can be completed in about one to five years (this process is not comprised solely of rendering, however). A workstation typically costs $2,000 to $16,000, with the more expensive stations being able to render much faster, due to the more technologically advanced hardware that they contain. Pixar's Renderman is rendering software which is widely used as the movie animation industry standard, in competition with Mental Ray. It can be bought at the official Pixar website for about $3,500. It will work on Linux, Mac OS X, and Microsoft Windows based graphics workstations along with an animation program such as Maya and Softimage XSI. Professionals also use digital movie cameras, motion capture or performance capture, bluescreens, film editing software, props, and other tools for movie animation.
Modeling human faces

Main article: Computer facial animation
The modeling of human facial features is both one of the most challenging and sought after elements in computer-generated imagery. Computer facial animation is a highly complex field where models typically include a very large number of animation variables. Historically speaking, the first SIGGRAPH tutorials on State of the art in Facial Animation in 1989 and 1990 proved to be a turning point in the field by bringing together and consolidating multiple research elements, and sparked interest among a number of researchers.[1]
The Facial Action Coding System (with 46 action units such as "lip bite" or "squint") which had been developed in 1976 became a popular basis for many systems.[2] As early as 2001 MPEG-4 included 68 facial animation parameters for lips, jaws, etc., and the field has made significant progress since then and the use of facial microexpression has increased.[3][2]
In some cases, an affective space such as the PAD emotional state model can be used to assign specific emotions to the faces of avatars. In this approach the PAD model is used as a high level emotional space, and the lower level space is the MPEG-4 Facial Animation Parameters (FAP). A mid-level Partial Expression Parameters (PEP) space is then used in a two-level structure: the PAD-PEP mapping and the PEP-FAP translation model.[4]
The future

One open challenge in computer animation is a photorealistic animation of humans. Currently, most computer-animated movies show animal characters (A Bug's Life, Finding Nemo, Ratatouille, Ice Age, Over the Hedge), fantasy characters (Monsters Inc., Shrek, Teenage Mutant Ninja Turtles 4, Monsters vs. Aliens), anthropomorphic machines (Cars, WALL-E, Robots) or cartoon-like humans (The Incredibles, Despicable Me, Up). The movie Final Fantasy: The Spirits Within is often cited as the first computer-generated movie to attempt to show realistic-looking humans. However, due to the enormous complexity of the human body, human motion, and human biomechanics, realistic simulation of humans remains largely an open problem. Another problem is the distasteful psychological response to viewing nearly perfect animation of humans, known as "the uncanny valley." It is one of the "holy grails" of computer animation. Eventually, the goal is to create software where the animator can generate a movie sequence showing a photorealistic human character, undergoing physically-plausible motion, together with clothes, photorealistic hair, a complicated natural background, and possibly interacting with other simulated human characters. This could be done in a way that the viewer is no longer able to tell if a particular movie sequence is computer-generated, or created using real actors in front of movie cameras. Complete human realism is not likely to happen very soon,[citation needed] but when it does it may have major repercussions for the film industry.[citation needed]
For the moment it looks like three dimensional computer animation can be divided into two main directions: photorealistic and non-photorealistic rendering. Photorealistic computer animation can itself be divided into two subcategories: real photorealism (where performance capture is used in the creation of the virtual human characters) and stylized photorealism. Real photorealism is what Final Fantasy tried to achieve and will in the future most likely have the ability to give us live action fantasy features such as The Dark Crystal without having to use advanced puppetry and animatronics, while Antz is an example of stylized photorealism (in the future stylized photorealism may be able to replace traditional stop motion animation as in Corpse Bride). None of these are perfected yet, but progress continues.
The non-photorealistic/cartoonish direction is more like an extension of traditional animation, an attempt to make the animation look like a three dimensional version of a cartoon, still using and perfecting the main principles of animation articulated by the Nine Old Men, such as squash and stretch.
While a single frame from a photorealistic computer-animated feature will look like a photo if done right, a single frame from a cartoonish computer-animated feature will look like a painting (not to be confused with cel shading, which produces an even simpler look).
Detailed examples and pseudocode

In 2D computer animation, moving objects are often referred to as “sprites.” A sprite is an image that has a location associated with it. The location of the sprite is changed slightly, between each displayed frame, to make the sprite appear to move. The following pseudocode makes a sprite move from left to right:
var int x := 0, y := screenHeight / 2
while x < screenWidth
    drawBackground()
    drawSpriteAtXY(x, y)    // draw the sprite on top of the background
    x := x + 5              // move the sprite 5 pixels to the right for the next frame
Computer animation uses different techniques to produce animations. Most frequently, sophisticated mathematics is used to manipulate complex three dimensional polygons, apply “textures”, lighting and other effects to the polygons and finally rendering the complete image. A sophisticated graphical user interface may be used to create the animation and arrange its choreography. Another technique called constructive solid geometry defines objects by conducting boolean operations on regular shapes, and has the advantage that animations may be accurately produced at any resolution.
Let's step through the rendering of a simple image of a room with flat wood walls and a grey pyramid in the center of the room. The pyramid will have a spotlight shining on it. Each wall, the floor, and the ceiling is a simple polygon, in this case a rectangle. Each corner of the rectangles is defined by three values referred to as X, Y and Z. X is how far left or right the point is, Y is how far up or down the point is, and Z is how far in or out of the screen the point is. The wall nearest us would be defined by four points (in the order x, y, z). Below is a representation of how that wall is defined:
(0, 10, 0) (10, 10, 0)

(0, 0, 0) (10, 0, 0)
The far wall would be:
(0, 10, 20) (10, 10, 20)

(0, 0, 20) (10, 0, 20)
The pyramid is made up of five polygons: the rectangular base, and four triangular sides. To draw this image the computer uses math to calculate how to project this image, defined by three dimensional data, onto a two dimensional computer screen.
First we must also define where our view point is, that is, from what vantage point will the scene be drawn. Our view point is inside the room a bit above the floor, directly in front of the pyramid. First the computer will calculate which polygons are visible. The near wall will not be displayed at all, as it is behind our view point. The far side of the pyramid will also not be drawn as it is hidden by the front of the pyramid.
Next, each point is perspective projected onto the screen. The portions of the walls furthest from the view point will appear to be shorter than the nearer areas due to perspective. To make the walls look like wood, a wood pattern, called a texture, will be drawn on them. To accomplish this, a technique called "texture mapping" is often used. A small drawing of wood that can be repeatedly drawn in a matching tiled pattern (like wallpaper) is stretched and drawn onto the walls' final shape. The pyramid is solid grey so its surfaces can just be rendered as grey. But we also have a spotlight. Where its light falls we lighten colors, where objects block the light we darken colors.
Next we render the complete scene on the computer screen. If the numbers describing the position of the pyramid were changed and this process repeated, the pyramid would appear to move.
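Here is a minimal Python sketch of the projection step in that walkthrough: a simple pinhole-style perspective projection of the wall corners from a viewpoint inside the room. The camera is assumed to look straight down the +Z axis, and the viewpoint position, focal length and screen scale are values chosen for illustration, not values given in the text.

def project(point, eye, focal_length=1.0, screen_scale=100.0):
    # point: (x, y, z) in world units; eye: camera position, looking toward +z
    x, y, z = (p - e for p, e in zip(point, eye))
    if z <= 0:
        return None                    # behind the viewpoint, so it is not drawn
    # Points farther away (larger z) land closer to the screen center.
    sx = screen_scale * focal_length * x / z
    sy = screen_scale * focal_length * y / z
    return (round(sx, 1), round(sy, 1))

eye = (5.0, 3.0, 2.0)                   # inside the room, a bit above the floor
far_wall = [(0, 10, 20), (10, 10, 20), (0, 0, 20), (10, 0, 20)]
near_wall = [(0, 10, 0), (10, 10, 0), (0, 0, 0), (10, 0, 0)]

print([project(p, eye) for p in far_wall])    # four on-screen corner positions
print([project(p, eye) for p in near_wall])   # all None: behind the viewpoint, culled

Moving the pyramid and repeating this projection for every frame is exactly the "change the numbers, re-render" loop described above.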


 