Anatomy of a Game Engine
Hi,

We thought we'd add a little variety to what we do here. I'm going to post a series of articles on the anatomy of building a computer game engine, which should be very useful for anyone who wants to get into this line of work.
Also, since I'm translating from several sources at once, this may take a while: I have to both get the concepts right and simplify them, and translate them well. For now I'll update this after every 10 lines or so until it's finished.
------------------------------------------------------------------------------------------------------------------

We've come a very long way since the days of playing Doom. But was the pioneering status of those games in their time due only to the games themselves? Even back then, tools (that is, game engines) were at work behind the scenes, and they contributed a great deal to the success of that series of games.

Starting at the Beginning

So to begin, we should talk about the difference between engines and the technology built on top of them. Most people confuse the game engine with the content inside the game. Some even think of it as an analogy with a car's engine versus the car's body: you can take the engine out of a car, build other parts around it, and reuse it. Making a game works in much the same way. An engine gathers together all the tools and non-game-specific technology needed to build a new product.
((Wikipedia))
A game engine provides a suite of visual development tools in addition to reusable software components. These tools are generally provided in an integrated development environment to enable simplified, rapid, data-driven development of games. Game engines are sometimes called "game middleware" because, from a business standpoint, they provide a flexible and reusable software platform with all the core functionality needed to develop a game, while reducing the costs, complexity, and time-to-market that are all critical in the highly competitive computer game industry.
Every part of a game is built from content (animations, models, sounds, AI, and physics), collectively called the game's assets. A game-making tool is needed that can manage all of these pieces and make them work together.
As an example, let's first look at the structure of one of the world's best-known games: Quake.



The structure of this game engine can be divided into 11 parts. Yes, really, 11. Now it's time to look at the first one.

The Renderer



Why is the renderer so important? The answer is obvious: without it, we can't see anything.

This component draws the visual elements of the game for the player, so the observer or player can decide what is being displayed in the scene.
The renderer is usually the first thing an engine developer builds. Without it, how would you know that your code and commands are actually doing anything in the scene?
A renderer can consume up to 50 percent of the CPU's resources, so game developers always try to impose the smallest possible processing load on the CPU while getting the highest possible quality out of it. Without a good renderer, a game will very likely fail.
Working with pixels on this scale has become practical thanks to 3D graphics cards.
In general terms, the renderer's job is to gather all of the game's visual elements and present them to the user. Optimizing it is absolutely critical: if 3D operations and calculations saturate the memory bandwidth during the rendering cycle, the whole memory system can get tied up and the cycle can stall.




1:




2:

Following friends' suggestions, I'll post all of the texts here.




Part one, section three.


((The previous two sections were translated above.))

-------------------------
Creating the 3D World

Recently I had a conversation with someone who has been in the computer graphics biz for years, and she confided to me that the first time she saw a 3D computer image being manipulated in real time she had no idea how it was done, or how the computer was able to store a 3D image.


This is likely true for the average person on the street today, even if they play PC, console, or arcade games frequently.


We'll discuss some of the details of creating a 3D world from a game designer's perspective below, but you should also read Dave Salvator's three-part 3D Pipeline Tutorial for a structured overview of all the main processes involved in generating a 3D image.






3D objects are stored as points in the 3D world (called vertices), with a relation to each other, so that the computer knows to draw lines or filled surfaces between these points in the world.


So a box would have 8 points, one for each of the corners.

There are 6 surfaces for the box, one for each of the sides it would have.

This is pretty much the basis of how 3D objects are stored.

When you start getting down to some of the more complicated 3D stuff, like a Quake level for example, you are talking about thousands of vertices (sometimes hundreds of thousands), and thousands of polygonal surfaces.

See the above graphic for a wireframe representation.

Essentially though, it relates to the box example above, only with lots and lots of small polygons to make up complicated scenes.
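The box description above maps directly onto the indexed-mesh layout most engines and 3D APIs use: store each vertex once, then describe surfaces as triangles that index into the vertex list. A minimal sketch in C++; the struct and array names are purely illustrative:

```cpp
#include <array>
#include <cstddef>

// A 3D point (vertex) in world space.
struct Vec3 { float x, y, z; };

// A triangle indexes three vertices instead of storing them,
// so shared corners are stored only once.
struct Triangle { std::size_t a, b, c; };

// The 8 corners of a unit box, one vertex per corner.
constexpr std::array<Vec3, 8> boxVertices{{
    {0,0,0}, {1,0,0}, {1,1,0}, {0,1,0},   // back-face corners
    {0,0,1}, {1,0,1}, {1,1,1}, {0,1,1},   // front-face corners
}};

// Each of the 6 square faces splits into 2 triangles: 12 in total.
constexpr std::array<Triangle, 12> boxTriangles{{
    {0,1,2},{0,2,3},  {4,6,5},{4,7,6},   // back, front
    {0,4,5},{0,5,1},  {3,2,6},{3,6,7},   // bottom, top
    {0,3,7},{0,7,4},  {1,5,6},{1,6,2},   // left, right
}};
```

A Quake-scale level is the same structure with hundreds of thousands of entries in each array instead of 8 and 12.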


How models and worlds are stored is a part of the function of the renderer, more than it is part of the application / game.

The game logic doesn't need to know how objects are represented in memory, or how the renderer is going to go about displaying them.

The game simply needs to know that the renderer is going to represent objects using the correct view, and displaying the correct models in their correct frames of animation.


In a good engine, it should be possible to completely replace the renderer with a new one, and not touch a line of game code.

Many cross-platform engines, such as the Unreal engine, and many homegrown console engines do just that -- for example, the renderer module for the GameCube version of the game can be replaced, and off you go.


Back to internal representation-- there's more than one way to represent points in space in computer memory beyond using a coordinate system.

You can do it mathematically, using an equation to describe straight or curved lines, and derive polygons, which pretty much all 3D cards use as their final rendering primitive.

A primitive is the lowest rendering unit you can use on any card, which for almost all hardware now is a three-point polygon (triangle).

The newer nVidia and ATI cards do allow you to render mathematically (called higher-order surfaces), but since this isn't standard across all graphics cards, you can't depend on it as a rendering strategy just yet.

This is usually somewhat expensive from a processing perspective, but it's often the basis for new and experimental technologies, such as terrain rendering or making hard-edged objects have softer edges.

We'll define these higher-order surfaces a little more in the patches section below.




3:


4:

Culling Overview

Here's the problem.

I have a world described in several hundred thousand vertices / polygons.

I have a first person view that's located on one side of our 3D world.

In this view are some of the world's polygons, though others are not visible, because some object or objects, like a visible wall, are obscuring them.

Even the best game coders can't handle 300,000 triangles in the view on a current 3D card and still maintain 60fps (a key goal).

The cards simply can't handle it, so we have to do some coding to remove those polygons that aren't visible before handing them to the card.

The process is called culling.





If you don't see it, it isn't there.

By culling the non-visible parts of a 3D world, a game engine can reduce its workload considerably.

Look at this scene and imagine that there's a room behind the one under construction, but if it's not visible from this vantage point, the other room's geometry and other 3D data can be discarded.

There are many different approaches to culling.

Before we get into that however, let's discuss why the card can't handle super-high polygon counts.

I mean, doesn't the latest card handle X million polygons per second? Shouldn't it be able to handle anything? First, you have to understand that there are such things as marketing polygon rates, and then real world polygon rates.

Marketing polygon rates are the rates the card can achieve theoretically.


How many polygons can it handle if they are all on screen, the same texture, and the same size, without the application that's throwing polygons at the card doing anything except throwing polygons at the card.

Those are numbers the graphics chip vendors throw at you.

However, in real gaming situations the application is often doing lots of other things in the background -- doing the 3D transforms for the polygons, lighting them, moving more textures to the card memory, and so on.

Not only do textures get sent to the card, but the details for each polygon too.

Some of the newer cards allow you to actually store the model / world geometry details within the card memory itself, but this can be costly in terms of eating up space that textures would normally use, plus you'd better be sure you are using those model vertexes every frame, or you are just wasting space on the card.

But we're digressing here.

The key point is that what you read on the side of the box isn't necessarily what you would get when actually using the card, and this is especially true if you have a slow CPU, or insufficient memory.




5:


6:

I'm ready to help.


7:


8:

Basic Culling Methods

The simplest approach to culling is to divide the world up into sections, with each section having a list of other sections that can be seen.

That way you only display what's possible to be seen from any given point.

How you create the list of possible view sections is the tricky bit.

Again, there are many ways to do this, using BSP trees, Portals and so on.
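The "list of sections that can be seen" idea above amounts to a precomputed potentially-visible-set lookup. Here is a toy sketch of it, not any particular engine's layout; the names and the section numbering are made up:

```cpp
#include <cstddef>
#include <vector>

// Each section of the world keeps a precomputed list of the other
// sections that can be seen from inside it.
struct Section {
    std::vector<std::size_t> visibleSections;  // indices visible from here
};

// Given the camera's current section, return the sections worth drawing:
// the one we are in, plus everything on its precomputed visibility list.
std::vector<std::size_t> sectionsToDraw(const std::vector<Section>& world,
                                        std::size_t cameraSection) {
    std::vector<std::size_t> out{cameraSection};
    const auto& vis = world[cameraSection].visibleSections;
    out.insert(out.end(), vis.begin(), vis.end());
    return out;
}
```

Everything not on the returned list is culled before the card ever sees it; the hard part, as the text says, is computing those lists in the first place.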


I'm sure you've heard the term BSP used when talking about Doom or Quake.

It stands for Binary Space Partitioning.

This is a way of dividing up the world into small sections, and organizing the world polygons such that it's easy to determine what's visible and what's not -- handy for software based renderers that don't want to be doing too much overdrawing.

It also has the effect of telling you where you are in the world in a very efficient fashion.
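The "where am I in the world" query comes down to repeatedly classifying a point against splitting planes while walking down the tree. A hedged sketch of that one core test; real BSP code also handles points lying on the plane and stores child pointers at each node:

```cpp
// A splitting plane in a BSP tree: ax + by + cz = d.
struct Plane { float a, b, c, d; };

enum class Side { Front, Back };

// Classify a point against a splitting plane. Repeating this test at
// each node while descending the tree locates the camera's leaf (its
// position in the world) in a number of steps equal to the tree depth.
Side classify(const Plane& p, float x, float y, float z) {
    float dist = p.a * x + p.b * y + p.c * z - p.d;
    return dist >= 0.0f ? Side::Front : Side::Back;
}
```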



A Portal based engine (first really brought to the gaming world by the defunct project Prey from 3D Realms) is one where each area (or room) is built as its own model, with doors (or portals) in each section that can view another section.

The renderer renders each section individually as separate scenes.

At least that's the theory.

Suffice to say this is a required part of any renderer and is more often than not of great importance.

Some of these techniques fall under the heading of "occlusion culling", but all of them have the same intent: eliminate unnecessary work early.



For an FPS (first-person shooter game) where there are often a lot of triangles in view, and the player assumes control of the view, it's imperative that the triangles that can't be seen be discarded, or culled.

The same holds true for space simulations, where you can see for a long, long way -- culling out stuff beyond the visual range is very important.

For games where the view is controlled -- like an RTS (real-time strategy game)-- this is usually a lot easier to implement.

Often this part of the renderer is still in software, and not handed off to the card, but it's pretty much only a matter of time before the card will do it for you.

9:


10:

I feel some of this material is dated; it may not be that interesting to everyone here.

Especially since the article is from 2002.


11:

Basic Graphics Pipeline Flow

A simple example of a graphics pipeline from game to rendered polygons might flow something like this:

  • Game determines what objects are in the game, what models they have, what textures they use, what animation frame they might be on, and where they are located in the game world.

    The game also determines where the camera is located and the direction it's pointed.
  • Game passes this information to the renderer.

    In the case of models, the renderer might first look at the size of the model, and where the camera is located, and then determine if the model is onscreen at all, or to the left of the observer (camera view), behind the observer, or so far in the distance it wouldn't be visible.

    It might even use some form of world determination to work out if the model is visible (see next item).
  • The world visualization system determines where in the world the camera is located, and what sections / polygons of the world are visible from the camera viewpoint.

    This can be done many ways, from a brute force method of splitting the world up into sections and having a straight "I can see sections AB&C from section D" for each part, to the more elegant BSP (binary space partitioned) worlds.

    All the polygons that pass these culling tests get passed to the polygon renderer.
  • For each polygon that is passed into the renderer, the renderer transforms the polygon according to both local math (i.e. is the model animating?) and world math (where is it in relation to the camera?), and then examines the polygon to determine if it is back-faced (i.e. facing away from the camera) or not.

    Those that are back-faced are discarded.

    Those that are not are lit, according to whatever lights the renderer finds in the vicinity.

    The renderer then looks at what texture(s) this polygon uses and ensures the API/graphics card is using that texture as its rendering base.

    At this point the polygons are fed off to the rendering API and then onto the card.

Obviously this is very simplistic, but you get the idea.
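One concrete step from the flow above is the back-face test: triangles wound away from the camera are discarded before any lighting or texturing work is spent on them. A simplified sketch of the screen-space variant, assuming counter-clockwise winding for front faces and an orthographic projection (drop z, check the sign of the projected triangle's signed area):

```cpp
struct Vec3 { float x, y, z; };

// True if the triangle faces away from the camera and can be culled.
// With counter-clockwise winding for front faces, a non-positive
// projected signed area means the triangle is back-faced.
bool isBackFacing(const Vec3& v0, const Vec3& v1, const Vec3& v2) {
    // Twice the signed area of the triangle projected onto the screen plane.
    float area2 = (v1.x - v0.x) * (v2.y - v0.y)
                - (v1.y - v0.y) * (v2.x - v0.x);
    return area2 <= 0.0f;
}
```

On average this throws away roughly half of a closed model's triangles before they ever reach the card.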

The following chart is excerpted from Dave Salvator's 3D pipeline story, and gives you some more specifics:
3D Pipeline - High-Level Overview

1. Application/Scene



  • Scene/Geometry database traversal
  • Movement of objects, and aiming and movement of view camera
  • Animated movement of object models
  • Description of the contents of the 3D world
  • Object Visibility Check including possible Occlusion Culling
  • Select Level of Detail (LOD)

2. Geometry



  • Transforms (rotation, translation, scaling)
  • Transform from Model Space to World Space (Direct3D)
  • Transform from World Space to View Space
  • View Projection
  • Trivial Accept/Reject Culling
  • Back-Face Culling (can also be done later in Screen Space)




  • Lighting
  • Perspective Divide - Transform to Clip Space
  • Clipping
  • Transform to Screen Space

3. Triangle Setup



  • Back-face Culling (or can be done in view space before lighting)
  • Slope/Delta Calculations
  • Scan-Line Conversion

4. Rendering / Rasterization



  • Shading
  • Texturing
  • Fog
  • Alpha Translucency Tests
  • Depth Buffering
  • Antialiasing (optional)
  • Display



Usually you would feed all the polygons into some sort of list, and then sort this list according to texture (so you only feed the texture to the card once, rather than per polygon), and so on.

It used to be that polygons would be sorted using distance from the camera, and those farthest away rendered first, but these days, with the advent of Z buffering that is less important.

Except of course, for those polygons that have transparency in them.

These have to be rendered after all the non translucent polygons are done, so that what's behind them can show up correctly in the scene.

Of course, given that, you'd have to render these polygons back-to-front as a matter of course.

But often in any given FPS scene there generally aren't too many transparent polys.

It might look like there are, but actually in comparison to those polygons that don't have alpha in them, it's a pretty low percentage.


Once the application hands the scene off to the API, the API in turn can take advantage of hardware-accelerated transform and lighting (T&L), which is now commonplace in 3D cards.

Without going into an explanation of the matrix math involved (see Dave Salvator's 3D pipeline story), transforms allow the 3D card to render the polygons of whatever you are trying to draw at the correct angle and at the correct place in the world relative to where your camera happens to be pointing at any given moment.


There are a lot of calculations done for each point, or vertex, including clipping operations to determine if any given polygon is actually viewable, due to it being off screen or partially on screen.

Lighting operations work out how bright textures' colors need to be, depending on how light in the world falls on this vertex, and from what angle.

In the past, the CPU handled these calculations, but now current-generation graphics hardware can do it for you, which means your CPU can go off and do other stuff.

Obviously this is a Good Thing(tm), but since you can't depend on all 3D cards out there having T&L on board, you will have to write all these routines yourself anyway (again speaking from a game developer perspective).

You'll see the "Good Thing(tm)" phrase throughout various segments of this story.

These are features I think make very useful contributions to making games look better.

Not surprisingly, you'll also see its opposite; you guessed it, a Bad Thing(tm) as well.

I'm getting these phrases copyrighted, but for a small fee you can still use them too.



12:


13:

I'm not that much of an expert on the technical side.

But I think part 3, the "Musings on Memory Usage" section, feels a bit dated.


14:

How about posting the material in the thread section by section, in English, so that anyone who sees a section can translate it.

Those who know English can read it for themselves, and translate it if they like.
Then, once it's complete, collect it all in one place and put it up for download, etc.


15:


16:

((Translated by my dear friend Anti military))

Beyond triangles, patches are now commonly used. Patches (another name for higher-order surfaces) are great because they can describe geometry (usually curved geometry) with a mathematical expression, which is faster than listing out piles of polygons and their positions in the game world.

This way you can build (and shape) your polygons exactly, and decide exactly how many polygons you want generated from the patch.

You could, for example, describe a pipe, and then have many instances of that pipe in the scene.

In some rooms, where you've already drawn 10,000 polygons, you can say: OK, this pipe can only have 100 polygons, because we've already drawn a very large number of polygons, and any more could drag the frame rate down.

But in another room, where only 5,000 polygons are in view, you can say: now this pipe can have 500 polygons, since we haven't hit our polygon budget for this frame yet. (I think he means there's still room to add polygons.) It's great stuff, but you first have to decode all of this and generate the meshes, and that isn't free. Still, there's a real saving in sending a patch's equations across the AGP bus versus sending boatloads of vertices describing the same objects. SOF2 uses a variation of this approach to build its terrain system...
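The pipe example boils down to evaluating the patch's equation at however many parameter steps the current polygon budget allows. A sketch using a quadratic Bezier curve, the one-dimensional building block of the biquadratic patches described here; the control points in the usage are made up:

```cpp
#include <vector>

struct Vec3 { float x, y, z; };

// Evaluate a quadratic Bezier curve at parameter t in [0,1]
// from its three control points.
Vec3 bezier(const Vec3& p0, const Vec3& p1, const Vec3& p2, float t) {
    float u = 1.0f - t;
    float b0 = u * u, b1 = 2.0f * u * t, b2 = t * t;  // Bernstein weights
    return {b0 * p0.x + b1 * p1.x + b2 * p2.x,
            b0 * p0.y + b1 * p1.y + b2 * p2.y,
            b0 * p0.z + b1 * p1.z + b2 * p2.z};
}

// Tessellate the same mathematical curve into however many segments the
// frame's polygon budget allows: few in a crowded room, many in an
// empty one. More segments = smoother curve, more polygons.
std::vector<Vec3> tessellate(const Vec3& p0, const Vec3& p1, const Vec3& p2,
                             int segments) {
    std::vector<Vec3> pts;
    for (int i = 0; i <= segments; ++i)
        pts.push_back(bezier(p0, p1, p2, float(i) / float(segments)));
    return pts;
}
```

The same three control points can produce 10 segments in one room and 500 in another, which is exactly the budget-driven detail control the text describes.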




In fact, ATI now has TruForm, which can take a triangle-based model and convert it to higher-order surfaces just to smooth it out...

...and then convert that back into a model with a higher triangle count (this is called retesselation), which is then sent down the rest of the pipeline for processing.

ATI actually added a stage just before their T&L engine to handle this processing.

The catch is controlling what gets smoothed and what doesn't. Some hard edges you want to keep, like a nose, can end up smoothed inappropriately. Still, it's a clever technology, and I can see it being used more and more in the future...

That was the first part. In part two we'll continue the introductory material with lighting and texturing the world, and then get into the deeper sections after that...


17:


18:

During the transform process, usually in a coordinate space that's known as view space, we get to one of the most crucial operations: lighting.

It's one of those things that when it works, you don't notice it, but when it doesn't, you notice it all too much.

There are various approaches to lighting, ranging from simply figuring out how a polygon is oriented toward a light, and adding a percentage of the light's color based on orientation and distance to the polygon, all the way to generating smooth-edged lighting maps to overlay on basic textures.

And some APIs will actually offer pre-built lighting approaches.

For example, OpenGL offers per polygon, per vertex, and per pixel lighting.



In vertex lighting, you determine how many polygons are touching one vertex and then take the mean of all the resultant polygons orientations (called a normal) and assign that normal to the vertex.

Each vertex for a given polygon will point in slightly different directions, so you wind up gradating or interpolating light colors across a polygon, in order to get smoother lighting.

You don't necessarily see each individual polygon with this lighting approach.

The advantage of this approach is that hardware can often help do this in a faster manner using hardware transform and lighting (T&L).

The drawback is that it doesn't produce shadowing.

For instance, both arms on a model will be lit the same way, even if the light is on the right side of the model, and the left arm should be left in shadow cast by the body.
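The vertex-lighting recipe above (average the normals of the polygons touching a vertex, then light the vertex with the result) can be sketched like this; the unweighted average and the plain Lambert diffuse term are simplifying assumptions, since real engines often weight by face area or angle:

```cpp
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

// Average the normals of every polygon touching a vertex, then
// renormalize; the result becomes that vertex's normal.
Vec3 averageNormal(const std::vector<Vec3>& faceNormals) {
    Vec3 sum{0, 0, 0};
    for (const Vec3& n : faceNormals) {
        sum.x += n.x; sum.y += n.y; sum.z += n.z;
    }
    float len = std::sqrt(sum.x * sum.x + sum.y * sum.y + sum.z * sum.z);
    return {sum.x / len, sum.y / len, sum.z / len};
}

// Lambert diffuse term: brightness from the angle between the vertex
// normal and the direction toward the light (both unit length).
float diffuse(const Vec3& normal, const Vec3& toLight) {
    float d = normal.x * toLight.x + normal.y * toLight.y + normal.z * toLight.z;
    return d > 0.0f ? d : 0.0f;  // surfaces facing away get no light
}
```

Because each vertex of a polygon ends up with a slightly different normal, interpolating these per-vertex brightness values across the polygon is what produces the smooth gradation described above.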


These simple approaches use shading to achieve their aims.

For flat polygon lighting when rendering a polygon, you ask the rendering engine to tint the polygon to a given color all over.

This is called flat shading lighting (each polygon reflects a specific light value across the entire polygon, giving a very flat effect in the rendering, not to mention showing exactly where the edges of each polygon exist).


For vertex shading (called Gouraud shading) you ask the rendering engine to tint each vertex provided with a specific color.

Each of these vertex colors is then taken into account when rendering each pixel depending on its distance from each vertex based on interpolating.

(This is actually what Quake III uses on its models, to surprisingly good effect).


Then there's Phong shading.

Like Gouraud shading, this works across the texture, but rather than just using interpolation from each vertex to determine each pixel's color, it does the same work for each pixel that would be done for each vertex.

For Gouraud shading, you need to know what lights fall on each vertex.

For Phong, you do this for each pixel.


Not surprisingly, Phong Shading gives much smoother effects, but is far more costly in rendering time, since each pixel requires lighting calculations.

The flat shading method is fast, but crude.

Phong shading is more computationally expensive than Gouraud shading, but gives the best results of all, allowing effects like specularity ("shiny-ness"). These are just some of the tradeoffs you must deal with in game development.





In a Different Light

Next up is light map generation, where you use a second texture map (the light map) and blend it with the existing texture to create the lighting effect.

This works quite well, but is essentially a canned effect that's pre-generated before rendering.

But if you have dynamic lights (i.e. lights that move, or get turned on and off with no program intervention) then you will have to regenerate the light maps every frame, modifying them according to how your dynamic lights may have moved.

Light maps can render quickly, but they are very expensive in terms of memory required to store all these textures.

You can use some compression tricks to make them take less memory space, or reduce their size, even make them monochromatic (though if you do that, you don't get colored lights), and so on.

But if you do have multiple dynamic lights in the scene, regenerating light maps could end up being expensive in terms of CPU cycles.
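At the pixel level, applying a light map is just modulating the base texture texel by the light map texel. A minimal 8-bit grayscale sketch; real engines do this per color channel, and usually in hardware via multitexture blending:

```cpp
#include <cstdint>

// Modulate a base texture texel by a light map texel.
// 0 in the light map = full shadow, 255 = fully lit.
std::uint8_t applyLightMap(std::uint8_t baseTexel, std::uint8_t lightTexel) {
    return static_cast<std::uint8_t>((baseTexel * lightTexel) / 255);
}
```

Regenerating a light map for a dynamic light means recomputing the `lightTexel` values every frame, which is where the CPU cost mentioned above comes from.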


Usually there's some kind of hybrid lighting approach used in many games.

Quake III for instance, uses light maps for the world, and vertex lighting for the animating models.

Pre-processed lights don't affect the animated models correctly--they take their overall light value for the whole model from the polygon they are standing on--and dynamic lights will be applied to give the right effect.

Using a hybrid lighting approach is a tradeoff that most people don't notice, but it usually gives an effect that looks "right".

That's what games are all about--going as far as necessary to make the effect look "right", but not necessarily correct.


Of course all that goes out the window for the new Doom engine, but then that's going to require a 1GHz CPU and a GeForce 2 at the very least to get all the effects.

Progress it is, but it does all come at a price.


Once the scene has been transformed and lit, we move on to clipping operations.

Without getting into gory detail, clipping operations determine which triangles are completely inside the scene (called the view frustum) or are partially inside the scene.

Those triangles completely inside the scene are said to be trivially accepted, and they can be processed.

For a given triangle that is partially inside the scene, the portion outside the frustum will need to be clipped off, and the remaining polygon inside the frustum will need to be retesselated so that it fits completely inside the visible scene.



Once the scene has been clipped, the next stage in the pipeline is the triangle setup phase (also called scan-line conversion) where the scene is mapped to 2D screen coordinates.

At this point we get into rendering operations.





Textures and MIP Mapping

Textures are hugely important to making 3D scenes look real, and are basically little pictures that you break up into polygons and apply to an object or area in a scene.

Multiple textures can take up a lot of memory, and it helps to manage their size with various techniques.

Texture compression is one way of making texture data smaller, while retaining the picture information.

Compressed textures take up less space on the game CD, and more importantly, in memory and on your 3D card.

Another upside is that when you ask the card to display the texture for the first time, the compressed (smaller) version is sent from the PC main memory across the AGP interconnect to the 3D card, making everything that little bit faster.

Texture compression is a Good Thing.

We'll discuss more about texture compression below.


MIP Mapping

Another technique used by game engines to reduce the memory footprint and bandwidth demands of textures is to use MIP maps.

The technique of MIP mapping involves preprocessing a texture to create multiple copies, where each successive copy is one-half the size of the prior copy.

Why would you do this? To answer that, you need to understand how 3D cards display a texture.

In the worst case you take a texture, stick it on a polygon, and just whack it out to the screen.

Let's say there's a 1:1 relationship, so one texel (texture element) in the original texture map corresponds to one pixel on a polygon associated with the object being textured.

If the polygon you are displaying is scaled down to half size, then effectively the texture is displaying every other texel.

Now this is usually OK -- but can lead to some visual weirdness in some cases.

Let's take the idea of a brick wall.

Say the original texture is a brick wall, with lots of bricks, but the mortar between them is only one pixel wide.

If you scale the polygon down to half-size, and if only every other texel is applied, all of a sudden all your mortar vanishes, since it's being scaled out.

It just gives you weird images.




With MIP mapping, you scale the image yourself, before the card gets at it, and since you can pre-process it, you do a better job of it, so the mortar isn't just scaled out.

When the 3D card draws the polygon with the texture on it, it detects the scale factor and says, "you know, instead of just scaling the largest texture, I'll use the smaller one, and it will look better." There.

MIP mapping for all, and all for MIP mapping.
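Generating each successive MIP level is the pre-processing step described above: halve the texture by filtering each 2x2 block of texels down to one. A sketch for a square grayscale texture with a power-of-two size, using a plain box filter (real tools often use better filters, which is exactly the "you do a better job of it" point):

```cpp
#include <cstddef>
#include <vector>

// Build the next MIP level of a square grayscale texture of width w
// (w even) by averaging each 2x2 block of texels. Thin features like
// one-texel mortar lines fade out gradually instead of vanishing.
std::vector<float> downsample(const std::vector<float>& src, std::size_t w) {
    std::size_t half = w / 2;
    std::vector<float> dst(half * half);
    for (std::size_t y = 0; y < half; ++y)
        for (std::size_t x = 0; x < half; ++x) {
            float sum = src[(2 * y) * w + 2 * x] + src[(2 * y) * w + 2 * x + 1]
                      + src[(2 * y + 1) * w + 2 * x] + src[(2 * y + 1) * w + 2 * x + 1];
            dst[y * half + x] = sum / 4.0f;  // box filter: mean of the block
        }
    return dst;
}
```

Calling this repeatedly until the texture is 1x1 yields the full MIP chain the card picks from at draw time.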





Multiple Textures and Bump Mapping

Single texture maps make a large difference in overall 3D graphics realism, but using multiple textures can achieve even more impressive effects.

This used to require multiple rendering passes that ate fill rate for lunch.

But with multi-piped 3D accelerators like ATI's Radeon and nVidia's GeForce 2 and above, multiple textures can often be applied in a single rendering pass.

When generating multitexture effects, you draw one polygon with one texture on it, then render another one right over the top with another texture, but with some transparency to it.

This allows you to have textures appearing to move, or pulse, or even to have shadows (as we described in the lighting section).

Just draw the first texture, then draw a texture that is all black but has a transparency layer over the top of it, and voila -- instant shadowing.

This technique is called light mapping (or sometimes dark mapping), and up until the new Doom it has been the traditional way that levels are lit in Id engines.


Bump mapping is an old technology that has recently come to the fore.

Matrox was the first to really promote various forms of bump mapping in popular 3D gaming a few years ago.

It's all about creating a texture that shows the way light falls on a surface, to show bumps or crevices in that surface.

Bump mapping doesn't move with lights-- it's designed to be used for creating small imperfections on a surface, not for large bumps.

For instance you could use bump mapping to create seeming randomness to a terrain's detail in a flight simulator, rather than use the same texture repeatedly, which doesn't look very interesting.





Bump mapping creates a good deal more apparent surface detail, although there's a certain amount of sleight of hand going on here, since by strict definition it doesn't change relative to your viewing angle.

Given the per-pixel operations that the newer ATI and nVidia cards can perform, this default viewing angle drawback isn't really a hard and fast rule anymore.

Either way, it hasn't been used much by game developers until recently; more games can and should use bump mapping.





Cache Thrash = Bad Thing

Texture cache management is vital to making game engines go fast.

Like any cache, hits are good, and misses are bad.

If you get into a situation where you've got textures being swapped in and out of your graphics card's memory, you've got yourself a case of texture cache thrashing.

Often APIs will dump every texture when this happens, resulting in every one of them having to be reloaded next frame, and that's time consuming and wasteful.

To the gamer, this will cause frame rate stutters as the API reloads the texture cache.




There are various techniques for keeping texture cache thrashing to a minimum; they fall under the rubric of texture cache management, a crucial element of making any 3D game engine go fast.

Texture management is a Good Thing--what that means is only asking the card to use a texture once, rather than asking it to use it repeatedly.

It sounds contradictory, but in effect it means saying to the card, "look, I have all these polygons and they all use this one texture, can we just upload this once instead of many times?" This stops the API (or software behind the graphics drivers) from uploading the one texture to the card more than once.

An API like OpenGL usually handles texture caching itself, deciding which textures are stored on the card and which are left in main memory, based on rules like how often each texture is accessed.

The real issue comes here in that a) you don't often know the exact rules the API is using and b) often you ask to draw more textures in a frame than there is space in the card to hold them.


Another texture cache management technique is texture compression, as we discussed a bit earlier.

Textures can be compressed much like wave files are compressed to MP3 files, although with nowhere near the compression ratio.

Wave to MP3 compression yields about an 11:1 compression ratio, whereas most texture compression algorithms supported in hardware are more like 4:1, but even that can make a huge difference.

In addition, the hardware decompresses textures only as it needs them, on the fly as it is rendering.

This is pretty cool, but we've only just scratched the surface of what's possible there in the future.
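To put the 4:1 figure above in concrete numbers, here is the storage arithmetic as a tiny sketch (the 256x256 texture size is just an illustrative choice):

```python
# Storage arithmetic for the roughly 4:1 hardware texture compression
# ratio quoted above. The 256x256 size is an illustrative assumption.
def texture_bytes(width, height, bytes_per_texel=4, compression_ratio=1):
    return width * height * bytes_per_texel // compression_ratio

uncompressed = texture_bytes(256, 256)                     # 262,144 bytes
compressed = texture_bytes(256, 256, compression_ratio=4)  # 65,536 bytes
```

Even at a modest 4:1, a quarter of the bytes go across the bus and a quarter of the card's memory is consumed, which is why the difference is so noticeable.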


As mentioned, another technique is ensuring that the renderer only asks the card to render one texture once.

Ensure that all the polygons that you want the card to render using the same texture get sent across at once, rather than doing one model here, another model there, and then coming back to the original texture again.

Just do it once, and you only transfer it across the AGP interconnect once.

Quake III does this with its shader system.

As it processes polygons it adds them to an internal shader list, and once all the polygons have been processed, the renderer goes through the texture list sending across the textures and all the polygons that use them in one shot.
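The batching idea above can be sketched very simply: gather polygons into per-texture lists first, then submit each texture's whole list in one shot. This is a minimal illustration of the grouping step, not Quake III's actual shader system; the texture names and polygon placeholders are made up.

```python
# Minimal sketch of texture batching: collect polygons per texture so the
# renderer touches each texture only once per frame.
from collections import defaultdict

def batch_by_texture(polygons):
    """polygons: list of (texture_name, polygon_data) in arbitrary order.
    Returns an ordered list of (texture_name, [polygon_data, ...])."""
    buckets = defaultdict(list)
    for texture, poly in polygons:
        buckets[texture].append(poly)
    return list(buckets.items())

# Hypothetical scene: four polygons interleaving two textures.
scene = [("wall", "p1"), ("floor", "p2"), ("wall", "p3"), ("floor", "p4")]
batches = batch_by_texture(scene)
# Two texture submissions instead of four texture switches.
```

The payoff is exactly what the text describes: the same wall texture crosses the AGP interconnect once, no matter how many polygons reference it.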


The above process does tend to work against using hardware T&L on the card (if it is present) efficiently.

What you end up with are large numbers of small groups of polygons that use the same texture all over the screen, all using different transformation matrices.

This means more time spent setting up the hardware T&L engine on the card, and more time wasted.

It works OK for actual onscreen models, because they tend to use a uniform texture over the whole model anyway.

But it does often play hell with the world rendering, because many polygons tend to use the same wall texture.

It's usually not that big of a deal because by and large, textures for the world don't tend to be that big, so your texture caching system in the API will handle this for you, and keep the texture around on the card ready for use again.


On a console there usually isn't a texture caching system (unless you write one).

In the case of the PS2 you'd be better off going with the "texture once" approach.

On the Xbox it's immaterial since there is no graphics memory per se (it's a UMA architecture), and all the textures stay in main memory all the time anyway.


Trying to whack too many textures across the AGP interconnect is, in actual fact, the second most common bottleneck in modern PC FPS games today.

The biggest bottleneck is the actual geometry processing that is required to make stuff appear where it's supposed to appear.

The math involved in generating the correct world positions for each vertex in models is by far the most time consuming thing that 3D FPSes do these days.

Closely followed by shoving large numbers of textures across the AGP interconnect if you don't keep your scene texture budget under control.

You do have the capability to affect this, however.

By dropping your top MIP level (remember that--where the system constantly subdivides your textures for you?), you can halve the size of the textures the system is trying to push off to the card.

Your visual quality goes down--especially noticeable in cinematic sequences--but your frame rate goes up.

This approach is especially helpful for online games.
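The MIP-level trick above is easy to quantify. A MIP chain halves each dimension per level, so dropping the top level means the card pushes the next level down instead -- a quarter of the texels. This sketch just computes the chain sizes; the 256x256 starting size is an illustrative assumption.

```python
# Sketch of a MIP chain: each level halves both dimensions, so dropping
# the top level divides the texel count (and bytes) by four.
def mip_chain_bytes(width, height, bytes_per_texel=4):
    """Return per-level byte sizes for a full MIP chain down to 1x1."""
    sizes = []
    while True:
        sizes.append(width * height * bytes_per_texel)
        if width == 1 and height == 1:
            break
        width = max(1, width // 2)
        height = max(1, height // 2)
    return sizes

chain = mip_chain_bytes(256, 256)
# chain[0] is the full-size texture; chain[1] is what gets pushed to the
# card when the top MIP level is dropped.
```

That 4x reduction per texture is why the frame rate recovers so dramatically at the cost of some visual sharpness.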

Both Soldier of Fortune II and Jedi Knight II: Outcast are actually designed with cards in mind that aren't really prevalent in the marketplace yet.

In order to view the textures at their maximum size, you would need a minimum of 128MB on your 3D card.

Both products are being designed with the future in mind.


And that wraps Part II.

In the next segment, we'll be introducing many topics, including memory management, fog effects, depth testing, anti-aliasing, vertex shaders, APIs, and more.



Let's consider how 3D card memory is actually used today and how it will be used in the future.

Most 3D cards these days handle 32-bit color, which is 8 bits for red, 8 for blue, 8 for green, and 8 for transparency of any given pixel.

That's 256 shades of red, blue, and green in combination, which allows for 16.7 million colors-- that's pretty much all the colors you and I are going to be able to see on a monitor.

So why is game design guru John Carmack calling for 64-bit color resolution? If we can't see the difference, what's the point? The point is this: let's say we have a point on a model where several lights are falling, all of different colors.

We take the original color of the model and then apply one light to it, which changes the color value.

Then we apply another light, which changes it further.

The problem here is that with only 8 bits to play with, after applying 4 lights, the 8 bits just aren't enough to give us a good resolution and representation of the final color.

The lack of resolution is caused by quantization errors, which are essentially rounding errors resulting from an insufficient number of bits.

You can very quickly run out of bits, and as such, all the colors tend to get washed out.

With 16 or 32 bits per color, you have a much higher resolution, so you can apply tint after tint to properly represent the final color.

Such color-depths can quickly consume much storage space.
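The quantization problem above can be demonstrated in a few lines: apply the same chain of light tints at 8-bit integer precision and in floating point, and watch the rounding error accumulate. The tint values are illustrative assumptions.

```python
# Sketch of 8-bit quantization error: each tint rounds to an integer
# 0..255, and the errors accumulate with every light applied.
def apply_tints_8bit(value, tints):
    for t in tints:
        value = int(value * t)  # truncates to an integer at every step
    return value

def apply_tints_float(value, tints):
    for t in tints:
        value *= t              # full precision keeps the true result
    return value

tints = [0.9, 1.3, 0.7, 1.2]            # four lights of varying intensity
low = apply_tints_8bit(10, tints)       # 8-bit path: error at each step
high = apply_tints_float(10.0, tints)   # high-precision reference
```

After just four tints the 8-bit channel has drifted noticeably from the true value -- with 16 or more bits per channel, the drift stays negligible.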


We should also mention the whole card memory vs. texture memory thing.

What's going on here is that each 3D card really only has a finite amount of memory on board to stuff the front and back buffers, the z-buffer, plus all the wonderful textures.

With the original Voodoo1 card, it was 2MB, then came the Riva TNT, which upped it to 16MB.

Then the GeForce and ATI Rage gave you 32MB, now some versions of the GeForce 2 through 4 and Radeons come with 64MB to 128MB.

Why is this important? Well, let's crunch some numbers…
Let's say you want to run your game using a 32-bit screen at 1280x1024 with a 32-bit Z-buffer because you want it to look the best it can.

OK, that's 4 bytes per pixel for the screen, plus 4 bytes per pixel for the z-buffer, since both are 32 bits wide per pixel.

So we have 1280x1024 pixels -- that's 1,310,720 pixels.

Multiply that by 8 based on the number of bytes for the front buffer and the Z-buffer, and you get 10,485,760 bytes.

Include a back buffer, and you have 1280x1024x12, which is 15,728,640 bytes, or 15MB.

On a 16MB card, that would leave us with just 1MB to store all the textures.

Now if the original textures are true 32 bits or 4 bytes wide, as most stuff is these days, then we can store 1MB / 4 bytes per pixel = 262,144 pixels of textures per frame on the card itself.

That's about 4 256x256 texture pages.
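The arithmetic in the paragraphs above, spelled out as a quick calculation (the buffer layout follows the text: front buffer, back buffer, and Z-buffer, all 32 bits per pixel):

```python
# Frame buffer memory budget from the example above: 1280x1024, 32-bit
# color, 32-bit Z, front + back + Z buffers on a 16MB card.
def framebuffer_bytes(width, height, bytes_per_pixel=4, buffers=3):
    """buffers=3 counts the front buffer, back buffer, and Z-buffer."""
    return width * height * bytes_per_pixel * buffers

used = framebuffer_bytes(1280, 1024)   # 15,728,640 bytes, i.e. 15MB
left = 16 * 1024 * 1024 - used         # what remains on a 16MB card
texture_pixels = left // 4             # 32-bit texels that fit in the rest
pages = texture_pixels // (256 * 256)  # about 4 256x256 texture pages
```

Run the numbers at other resolutions and the squeeze only gets worse, which is the whole point of the example.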


Clearly, the above example shows that the older 16MB frame buffers have nowhere near enough memory for what a modern game requires to draw its prettiness these days.

We'd obviously have to reload textures per frame to the card while it's drawing.

That's actually what the AGP bus was designed to do, but still, AGP is slower than a 3D card's frame buffer, so you'd incur a sizeable performance hit.

Obviously if you drop the textures down to 16-bit instead of 32-bit, you could push twice as many of these lower-resolution textures across AGP.

Also, if you ran at a lower color resolution per pixel, then more memory is available on the card for keeping often used textures around (called caching textures).

But you can never actually predict how users will set up their system.

If they have a card that runs at high resolutions and color depths, then chances are they'll set their cards that way.





Let's Get Down and Get Foggy

Now we come to fog, since it is a visual effect of sorts.

Most engines these days can handle this, as it comes in mighty handy for fading out the world in the distance, so you don't see models and scene geography popping on in the distance as they come into visual range crossing the far clipping plane.

There's also a technique called volumetric fogging.

For the uninitiated, this is where fog isn't a product of distance from the camera, but is an actual physical object that you can see, travel through, and pass out the other side-- with the visual fog levels changing as you move through the object.

Think of traveling through a cloud -- that's a perfect example of volumetric fogging.

A couple of good examples of implementation of volumetric fogging are Quake III's red mist on some of their levels, or the new LucasArts GameCube version of Rogue Squadron II.

That has some of the best-looking clouds I've ever seen -- about as real as you can get.




While we are talking about fogging, it might be a good time to briefly mention alpha testing and alpha blending for textures.

When the renderer goes to put a specific pixel on the screen, assuming it's passed the Z-buffer test (defined below), we might end up doing some alpha testing.

We may discover that the pixel needs to be rendered transparently to show some of what's behind it.

Meaning that we have to retrieve the pixel that's already there, mix in our new pixel and put the resulting blended pixel back in the same location.

This is called a read-modify-write operation, and it's far more time-consuming than an ordinary pixel write.


There are different types of mixing (or blending) that you can do, and these different effects are called blend modes.

Straight Alpha blending simply adds a percentage of the background pixel to an inverse percentage of the new pixel.

Then there's additive blending, which takes a percentage of the old pixel, and just adds a specific amount of the new pixel (rather than a percentage).

This gives much brighter effects (Kyle's Lightsaber effect in Jedi Knight II does this, to give the bright core).
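The two blend modes just described reduce to two small formulas, sketched here per channel (the clamp to 255 and the example pixel values are illustrative assumptions):

```python
# Sketch of the two blend modes described above, per 8-bit color channel.
def alpha_blend(dst, src, alpha):
    """Straight alpha blend: src*alpha mixed with dst*(1 - alpha)."""
    return min(255, int(src * alpha + dst * (1.0 - alpha)))

def additive_blend(dst, src, amount):
    """Additive blend: keep the old pixel and add a scaled new one.
    Tends to brighten -- the lightsaber-core style of effect."""
    return min(255, int(dst + src * amount))

mixed = alpha_blend(100, 200, 0.5)      # a weighted mix of old and new
bright = additive_blend(100, 200, 0.5)  # brighter than the alpha mix
```

Each call is the read-modify-write operation the text mentions: fetch the destination pixel, combine, write back -- which is why blended pixels cost more than plain writes.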


With each new card we see from vendors, we get newer and more complex blending modes made available to us in hardware to do more and crazier effects.

And of course with the per pixel operations available in the GF3+4 and latest Radeon boards, the sky is the limit.





Stencil Shadowing and Depth Testing

With stencil shadowing, things get complicated and expensive.

Without going into too many gory details right now (this could be an article all by itself), the idea is to render a view of a model from the light source's perspective and then use this to create or cast a polygon with this texture's shape onto the affected receptor surfaces.

You are actually casting a light volume which will 'fall' on other polygons in the view.

You end up with real-looking lighting that even has perspective built into it.

But it's expensive, because you're creating textures on the fly, and doing multiple renders of the same scene.


You can create shadows a multitude of different ways, and as is often the case, the rendering quality is proportional to the rendering work needed to pull off the effect.

There's delineation between what are called hard or soft shadows, with the latter being preferable, since they more accurately model how shadows usually behave in the real world.

There are several "good enough" methods that are generally favored by game developers.

For more on shadows, check out

Depth Testing

Now we come to depth testing, where occluded pixels are discarded and the concept of overdraw comes into play.

Overdraw is pretty straightforward-- it's just the number of times you've drawn one pixel location in a frame.

It's based on the number of elements existing in the 3D scene in the Z (depth) dimension, and is also called depth complexity.

If you do this overdrawing often enough -- for instance to have dazzling visual special effects for spells, like Heretic II had -- then you can reduce your frame rate to a crawl.

Some of the initial effects designed in Heretic II, when several people were on screen throwing spells at each other, resulted in situations where they were drawing the equivalent of every pixel in the screen some 40 times in one frame! Needless to say, this was something that had to be adjusted, especially for the software renderer, which simply couldn't handle this load without reducing the game to a slide show.

Depth testing is a technique used to determine which objects are in front of other objects at the same pixel location, so we can avoid drawing objects that are occluded.


Look at this scene and think about what you can't see.

In other words, what's in front of, or occluding other scene objects? Depth testing makes that determination.

I'll explain exactly how depth testing helps improve frame rates.

Imagine a scene that's pretty detailed, with lots of polygons (or pixels) behind each other, without a fast way to discard them before the renderer gets them.

By sort-ordering (in the Z-dimension) your non-alpha blended polygons so those closest to you are rendered first, you fill the screen with pixels that are closest first.

So when you come to render pixels that are behind these (as determined by Z or depth testing), they get discarded quickly, avoiding blending steps and saving time.

If you rendered these back to front, all the occluded objects would be rendered completely, then completely overwritten with others.

The more complex the scene, the worse this situation could get, so depth testing is a Good Thing!
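The front-to-back payoff can be shown with a one-pixel simulation: count how many writes actually happen at a single screen location under a simple Z-buffer test, for the two submission orders. The depth values are illustrative.

```python
# Sketch of depth testing at one pixel location: only surfaces closer
# than what's already there survive the test and get written.
def pixel_writes(depths):
    """depths: opaque surface depths in submission order (smaller = closer).
    Returns how many times the pixel actually gets written."""
    writes = 0
    z_buffer = float("inf")  # nothing drawn yet
    for z in depths:
        if z < z_buffer:     # depth test
            z_buffer = z
            writes += 1
    return writes

front_to_back = pixel_writes([1, 2, 3, 4])  # nearest first: 1 write
back_to_front = pixel_writes([4, 3, 2, 1])  # farthest first: full overdraw
```

Four surfaces deep, back-to-front does four times the pixel work of front-to-back -- exactly the overdraw the text warns about.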



Antialiasing

Let's quickly review anti-aliasing.

When rendering an individual polygon, the 3D card takes a good look at what's been rendered already, and will blur the edges of the new polygon so you don't get jagged pixel edges that would otherwise be plainly visible.

This technique is usually handled in one of two ways.

The first approach is at the individual polygon level, which requires you to render polygons from back to front of the view, so each polygon can blend appropriately with what's behind it.

If you render out of order, you can end up with all sorts of strange effects.

In the second approach, you render the whole frame at a much larger resolution than you intend to display it, and then when you scale the image down your sharp jagged edges tend to get blended away in the scaling.

This second approach gives nice results, but requires a large memory footprint, and a ton of memory bandwidth, as the card needs to render more pixels than are actually in the resulting frame.

Most new cards handle this fairly well, but still offer multiple antialiasing modes you can choose from, so you can trade off performance vs. quality.
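The second (supersampling) approach above boils down to averaging blocks of the oversized render into single display pixels. Here is a minimal 2x sketch on a grayscale image; the jagged diagonal edge is an illustrative input.

```python
# Sketch of supersampled antialiasing: render at 2x resolution, then
# average each 2x2 block down to one display pixel.
def downsample_2x(image):
    """image: 2D list (even dimensions) of grayscale values 0..255."""
    out = []
    for y in range(0, len(image), 2):
        row = []
        for x in range(0, len(image[0]), 2):
            block = (image[y][x] + image[y][x + 1] +
                     image[y + 1][x] + image[y + 1][x + 1])
            row.append(block / 4.0)  # average of the four samples
        out.append(row)
    return out

# A hard jagged diagonal edge at 4x4...
big = [[255, 0, 0, 0],
       [255, 255, 0, 0],
       [255, 255, 255, 0],
       [255, 255, 255, 255]]
small = downsample_2x(big)  # ...becomes softened grays at 2x2
```

The intermediate gray values in the result are the "blended away" edges the text describes; the cost is rendering four samples for every pixel you actually display.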

For a more detailed discussion of various popular antialiasing techniques used today,



Vertex and Pixel Shaders

Before we leave rendering technology, let's chat quickly about vertex and pixel shaders, since they are getting a fair amount of attention recently.

Vertex Shaders are a way of getting directly at the features of the hardware on the card without using the API very much.

For example, if a card has hardware T&L, you can either write DirectX or OpenGL code and hope your vertices go through the T&L unit (there is no way to be sure because it's all handled inside the driver), or you can go right to the metal and use vertex shaders directly.

They allow you to specifically code to the features of the card itself, using your own specialized code that uses the T&L engines and whatever else the card has to offer to the best of your advantage.

In fact both nVidia and ATI offer this feature in their current crop of cards.


Unfortunately, the way to address vertex shaders isn't consistent across cards.

You can't just write code once for vertex shaders and have it run on any card as you can with OpenGL or DirectX, which is bad news.

However, since you are talking directly to the metal of the card, it does offer the most promise for fast rendering of the effects that vertex shaders make possible.

(As well as creating clever special effects too--you can affect things using vertex shaders in ways an API just doesn't offer you).

In fact vertex shaders are really bringing 3D graphics cards back to the way that consoles are coded, with direct access to the hardware and the knowledge necessary to get the most out of the system, rather than relying on APIs to do it all for you.

For some programmers this will be a bit of a coding shock, but it's the price of progress.


To clarify further, vertex shaders are programs or routines to calculate and perform effects on vertexes before submitting them to the card to render.

You could do such things in software on the main CPU, or use vertex shaders on the card.

Transforming a mesh for an animated model is a prime candidate for a vertex program.
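For a sense of the work a vertex program does, here is the per-vertex math in plain Python: transform every model vertex by a matrix before rasterization. A real vertex shader runs this same kind of arithmetic per vertex on the card; the function and matrices here are illustrative.

```python
# Sketch of per-vertex transform work -- the kind of math a vertex
# program performs on the card (shown here on the CPU for clarity).
def transform_vertices(vertices, matrix):
    """vertices: list of (x, y, z); matrix: 3x3 row-major list of lists."""
    out = []
    for x, y, z in vertices:
        out.append(tuple(
            matrix[row][0] * x + matrix[row][1] * y + matrix[row][2] * z
            for row in range(3)
        ))
    return out

identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
flip_x = [[-1, 0, 0], [0, 1, 0], [0, 0, 1]]  # mirror across the YZ plane
model = [(1.0, 2.0, 3.0)]
```

Multiply this by thousands of vertices per model, per frame, and you can see why the geometry-processing load mentioned earlier dominates, and why offloading it to the card's shader units pays off.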


Pixel shaders are routines that you write that are performed per pixel when the texture is rendered.

Effectively you are subverting the blend mode operations that the card would normally do in hardware with your new routine.

This allows you to do some very clever pixel effects, like making textures in the distance out of focus, adding heat haze, and creating internal reflection for water effects to mention just a few possibilities.


Once ATI and nVidia can actually agree on pixel shader versioning (and DX9's new higher-level shading language will help further this cause), I wouldn't be at all surprised to see DirectX and OpenGL go the way of Glide--helpful to get started with, but ultimately not the best way to get the best out of any card.

I know I will be watching the coming years with interest.





In Closing...

Ultimately the renderer is where the game programmer gets judged most heavily.

Visual prettiness counts for a lot in this business, so it pays to know what you're doing.

One of the worst aspects for renderer programmers is the speed at which the 3D card industry changes.

One day you are trying to get images with transparency working correctly; the next day nVidia is doing presentations on vertex shader programming.

It moves very quickly, and for the most part, code written four years ago for 3D cards of that era is now obsolete, and needs to be completely reworked.

Even John Carmack has made mention of how he knows that the cool stuff he coded four years ago to get the most out of 3D cards at that time is now commonplace these days -- hence his desire to completely rework the renderer for each new project id produces.

Tim Sweeney of Epic agrees--here's a comment he made to me late last year:
We've spent a good 9 months replacing all the rendering code.

The original Unreal was designed for software rendering and later extended to hardware rendering.

The next-gen engine is designed for GeForce and better graphics cards, and has 100X higher polygon throughput than Unreal Tournament.


This requires a wholesale replacement of the renderer.

Fortunately, the engine is modular enough that we've been able to keep the rest of the engine -- editor, physics, AI, networking -- intact, though we've been improving them in many ways.




Sidebar: APIs--A Blessing and a Curse

So what is an API? It's an Application Programming Interface, which presents a consistent front end to an inconsistent backend.

For example, pretty much every 3D card out there has differences in how it implements its 3D-ness.

However, they all present a consistent front end to the end user or programmer, so they know that the code they write for 3D card X will give the same results on 3D card Y.

Well, that's the theory anyway.

About three years ago this might have been a fairly true statement, but since then things have changed in 3D card land, with nVidia leading the charge.



Right now in PC land, unless you are planning on building your own software rasterizer -- where you use the CPU to draw all your sprites, polygons, and particles (and people still do this: Age of Empires II: Age of Kings has an excellent software renderer, as does Unreal) -- you are going to be using one of two possible graphical APIs, OpenGL or DirectX.

OpenGL is a truly cross-platform API (software written for this API will work on Linux, Windows and the MacOS). It has been around for more than a few years and is well understood, but it is also beginning to show its age around the edges.

Until about four years ago the definition of an OpenGL driver feature set was what all the card manufacturers were working towards.

However, once that was achieved, there was no predefined roadmap of features to work towards, which is when all the card developers started to diverge in their feature set, using OpenGL extensions.


3dfx created the T-Buffer.

nVidia went for hardware Transform and Lighting.

Matrox went for bump mapping.

And so on.

My earlier statement, "things have changed in 3D card land over the past few years" is putting it mildly.


Anyway, the other possible API of choice is DirectX.

This is Microsoft-controlled, and is supported purely on the PC and Xbox.

No Apple or Linux versions exist for this for obvious reasons.

Because Microsoft has control of DirectX, it tends to be better integrated into Windows in general.


The basic difference between OpenGL and DirectX is that the former is owned by the 'community' and the latter by Microsoft.

If you want DirectX to support a new feature for your 3D card, then you need to lobby Microsoft, hopefully get your wish, and wait for a new release version of DirectX.

With OpenGL, since the card manufacturer supplies the driver for the 3D card, you can get access to the new features of the card immediately via OpenGL extensions.

This is OK, but as a game developer, you can't rely on them being widespread when you code your game.

And while they may speed up your game 50%, you can't require someone to have a GeForce 3 to run your game.

Well, you can, but it's a pretty silly idea if you want to be in the business next year.


This is a vast simplification of the issue, and there are all sorts of exceptions to what I've described, but the general idea here is pretty solid.

With DirectX you tend to know exactly what you can get out of a card at any given time, since if a feature isn't available to you, DirectX will simulate it in software (not always a good thing either, because it can sometimes be dog slow, but that's another discussion).

With OpenGL you get to go more to the guts of the card, but the trade off is uncertainty as to which exact guts will be there.





I'll post the rest later, I'm tired.




I translated this part myself, so sorry if it has a lot of rough spots.

Besides triangles, patches are now coming into more common use. Patches (also known as higher-order surfaces) are great because they can describe geometry (usually curved geometry) with a mathematical expression, which is far more compact than listing out piles of polygons and their positions in the game world.

This way you can build (and shape) your polygons on the fly, and decide exactly how many polygons you actually want to see from the patch.

You can, for example, describe a pipe, and then have many instances of that pipe in the world.

In some rooms, where you are already displaying 10,000 polygons, you can say, OK, this pipe can only have 100 polygons, because we are already showing a huge number of polygons and any more would drag the frame rate down.

But in another room, where only 5,000 polygons are in view, you can say, now this pipe can have 500 polygons, because we haven't yet hit the polygon budget for this frame (translator's note: I think he means there's still room to add more polygons!). It's very cool stuff, but you then have to decode all of this first and build the meshes, and that work isn't free. There is a real saving in sending a patch's equations across the AGP interconnect versus sending boatloads of vertices describing the same objects. SOF2 uses a variation on this approach to build its terrain system...
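The patch idea above can be sketched in one dimension: a quadratic Bezier curve (the 1D analog of a surface patch) is just three control points, yet it can be turned into as few or as many segments as the frame's polygon budget allows. The control points and segment counts here are illustrative.

```python
# Sketch of tessellating a patch at a chosen detail level: one compact
# mathematical description, any number of output points.
def tessellate_bezier(p0, p1, p2, segments):
    """Evaluate a quadratic Bezier curve at segments+1 evenly spaced points."""
    points = []
    for i in range(segments + 1):
        t = i / segments
        # Standard quadratic Bezier basis: (1-t)^2, 2(1-t)t, t^2.
        x = (1 - t) ** 2 * p0[0] + 2 * (1 - t) * t * p1[0] + t ** 2 * p2[0]
        y = (1 - t) ** 2 * p0[1] + 2 * (1 - t) * t * p1[1] + t ** 2 * p2[1]
        points.append((x, y))
    return points

# Same "pipe", two budgets: a busy room gets the coarse version, a quiet
# room gets the smooth one -- all from the same three control points.
coarse = tessellate_bezier((0, 0), (1, 2), (2, 0), 4)
smooth = tessellate_bezier((0, 0), (1, 2), (2, 0), 50)
```

Sending three control points across AGP instead of fifty-one vertices is the bandwidth saving the text describes.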




In fact, ATI now has TruForm, which can take a triangle-based model and convert it to higher-order surfaces in order to smooth it out.

It then converts it back into a model with a higher triangle count (this is called retesselation), and the model continues down the pipeline for further processing.

ATI actually added a stage right before their T&L engine to handle this processing.

The drawback here is controlling what gets smoothed and what doesn't; some hard edges you want to keep, like a nose, can end up smoothed inappropriately. Still, it's a clever technology, and I can see it being used more in the future...

That was Part I. In Part II we'll continue the introductory material with lighting and texturing the world, and then move on into the deeper sections that follow...





How your character models look on screen, and how easy they are to build, texture, and animate can be critical to the 'suspension of disbelief' factor that the current crop of games try to accomplish.

Character modeling systems have become increasingly sophisticated, with higher polygon count models, and cooler and cleverer ways to make the model move on screen.


These days you need a skeletal modeling system with bone and mesh level of detail, individual vertex bone weighting, bone animation overrides, and angle overrides just to stay in the race.

And that doesn't even begin to cover some of the cooler things you can do, like animation blending, bone Inverse Kinematics, and individual bone constraints, along with photo realistic texturing.

The list can go on and on.

But really, after dropping all this very 'in' jargon, what are we really talking about here? Let's find out.




To begin, let's define a mesh based system and its opposite, a skeletal animation system.

With a mesh based system, for every frame of an animation, you define the position in the world of every point within the model mesh.

Let's say, for instance, that you have a hand model containing 200 polygons, with 300 vertices (note that there usually isn't a 3-to-1 relationship between vertices and polygons, because lots of polygons often share vertices--and using strips and fans, you can drastically reduce your vertex count).

If you have 10 frames of animation, then for each frame you have the data for the location of 300 vertices in memory.

300 x 10 = 3000 vertices made up of x, y, z, and color/alpha info for each vertex.

You can see how this adds up real fast.

Quake I, II, and III all shipped with this system, which does allow for the ability to deform a mesh on the fly, like making skirts flap, or hair wave.


In contrast, with a skeletal animation system, the mesh is a skeleton made up of bones (the things you animate).

The mesh vertices relate to the bones themselves, so instead of the mesh representing each and every vertex position in the world, they are all positioned relative to the bones in the model.

Thus, if you move the bone, the position of the vertices that make up the polygons changes too.

This means you only have to animate the skeleton, which is typically about 50 bones or so--obviously a huge saving in memory.
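The memory comparison above can be put in rough numbers. The floats-per-bone figure below is an illustrative assumption (say, position plus rotation), not a number from any particular engine:

```python
# Rough memory comparison of mesh-based vs. skeletal animation, using
# the hand example above. Values are counts of stored floats.
FLOATS_PER_VERTEX = 3  # x, y, z (ignoring color/alpha for simplicity)

def mesh_anim_floats(vertices, frames):
    """Mesh-based: every vertex position stored for every frame."""
    return vertices * frames * FLOATS_PER_VERTEX

def skeletal_anim_floats(bones, frames, floats_per_bone=6):
    """Skeletal: only per-bone transforms stored per frame (assumed
    6 floats per bone: position + rotation)."""
    return bones * frames * floats_per_bone

mesh = mesh_anim_floats(300, 10)      # the 300-vertex hand, 10 frames
skeletal = skeletal_anim_floats(50, 10)  # a full ~50-bone skeleton, 10 frames
```

Even under these generous assumptions, a whole 50-bone skeleton animates for a third of the storage of one 300-vertex hand, and the gap widens as models grow while the bone count stays roughly fixed.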




My internet speed has dropped to the level of our national team, otherwise I'd have posted everything today.

