What's left for the Source engine?

As someone posted earlier, we are nothing more than chemical machines.

The logical step you all miss is that nobody has any idea how we work, and yet people gleefully assume we can reproduce the effect in something else within their lifetimes.

Tell me how sleep works and I'll believe people have a shot at understanding their own sentience someday.
 
Originally posted by nbk-brando
Read an interview here:
http://www.3dm-mc.com/features.php?id=19

See some of his other (similar) work here:
http://www.renderosity.com/gallery.ez?ByArtist=Y&Artist=Julian_J

And, yes, those are all strictly CG as well.

Never mind. You are obviously a fanboy, led by blind faith. I don't feel like arguing with you anymore.

"I have been trying to improve this head since I last posted it, this is not a straight render from Maya, I spent ages in Photoshop with it playing about with the levels and colours and making some areas darker or lighter"
http://www.renderosity.com/viewed.ez?galleryid=225554&Start=13&Artist=Julian_J&ByArtist=Yes

Ah, well, that explains it. It was manipulated in post. The 3D model would not stand up to scrutiny, as I already stated. And Julian even says so himself.
 
The developers that are going to get the most out of the Source engine as it is now are the ones not asking what features are next, but what kinds of tricks can be pulled with the ones we have now.

Of course, until we get the dev tools, that's pretty hard to say.
 
Originally posted by Boogaleeboo
The logical step you all miss is that nobody has any idea how we work, and yet people gleefully assume we can reproduce the effect in something else within their lifetimes.

Tell me how sleep works and I'll believe people have a shot at understanding their own sentience someday.


Who cares if we understand it? We only need to accurately scan it and recreate the connections to produce a working, thinking brain. It's like making a photocopy: you know how the photocopier works, you know something about what's on the original paper, but you don't know the location of every splotch of toner the photocopier places on the blank sheet, and the final copy is still pretty damn good.
 
Originally posted by MadsMan
Never mind. You are obviously a fanboy, led by blind faith. I don't feel like arguing with you anymore.

"I have been trying to improve this head since I last posted it, this is not a straight render from Maya, I spent ages in Photoshop with it playing about with the levels and colours and making some areas darker or lighter"
http://www.renderosity.com/viewed.ez?galleryid=225554&Start=13&Artist=Julian_J&ByArtist=Yes

Ah, well, that explains it. It was manipulated in post. The 3D model would not stand up to scrutiny, as I already stated. And Julian even says so himself.

As he said, LEVELS AND COLOURS. Does that compute? It means exactly what it sounds like: light and dark.

So what you're saying is, since he adjusted the levels and colours...it's poor work? Oh, wait...I remember:

The 3D model would not stand up to scrutiny

Maybe to some Half-Life 2 forum browser who has never had any 3D experience, but just maybe. Any amateur or professional 3D artist can spot that his model was well put together. What else have you got?


Oh, and, fanboy? Try colleague. I'm an aspiring 3D artist, and Julian here has set the bar. Try to find any freelance 3D artist who touches this talent. Reward: one cookie.

If you'd like to see another one of his models/renders with no Photoshop, see here:

http://www.renderosity.com/viewed.ez?galleryid=299555&Start=1&Artist=Julian_J&ByArtist=Yes

PM me when you get a clue.
 
Yeah, I feel bad about that...sorry guys! :( :(

Let me just reiterate my last on-topic post - seems like it could spark some good discussion:

Originally posted by nbk-brando
A mosquito can most definitely learn, just not on the level you appear to be referencing. An article I read in Scientific American gave a projected timeline for AI, and its endpoint was 50+ years off. The relevance? The article suggested that creating an AI with the capabilities of a fruit fly is around 5-10 years off. Just because something is a tiny, minute organism does not mean it is without computational complexity.



I really hope you're not talking about programs like Dragon Speak. Yes, there are several programs, in several fields and applications, that can "learn". But this is a scripted, programmed kind of learning. In other words, the code would be a scripted, albeit monstrous, string of variables (see the toy sketch below):

  • Food Preference

    1. If the user asks for "Chocolate", write the preference, time, and amount to the database and average them with previous results.

    2. The next time the user mentions a food keyword, correlate the data and suggest the common preference.

The key to "real" artificial intelligence is reason. This is the difference between a robot saying, "Master prefers to eat chocolate in the evening," and "Master likes to eat chocolate in the evening; it seems to make him happy." (Without scripted events.)

The current group of top scientists and contributors projects we may see this level of intelligence within 50 years, but it may be longer.
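
To make that difference concrete, here's a toy sketch of the kind of scripted "learning" I mean. Everything in it is hypothetical; it's the shape of the logic, not code from any real product:

    from collections import defaultdict
    from datetime import datetime

    class PreferenceBot:
        """Scripted 'learning': bookkeeping plus a canned response."""

        def __init__(self):
            # food -> list of (hour_of_day, amount) observations
            self.log = defaultdict(list)

        def observe(self, food, amount):
            # Rule 1: user asks for a food -> write preference, time,
            # and amount to the database for averaging with past results.
            self.log[food].append((datetime.now().hour, amount))

        def suggest(self):
            # Rule 2: user mentions a food keyword -> correlate the data
            # and suggest the most common preference. Still just a lookup.
            if not self.log:
                return "I have no data yet."
            food = max(self.log, key=lambda f: len(self.log[f]))
            hours = [h for h, _ in self.log[food]]
            return "Master prefers to eat %s around %d:00." % (
                food, sum(hours) / len(hours))

    bot = PreferenceBot()
    bot.observe("chocolate", 2)
    bot.observe("chocolate", 1)
    print(bot.suggest())

Notice there is no reasoning anywhere in there: the bot can only regurgitate correlations it was explicitly scripted to record, which is exactly the gap I'm talking about.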
 
Originally posted by nbk-brando
As he said, LEVELS AND COLOURS. Does that compute? It means exactly what it sounds like: light and dark.

So what you're saying is, since he adjusted the levels and colours...it's poor work? Oh, wait...I remember:



Maybe to some Half-Life 2 forum browser who has never had any 3D experience, but just maybe. Any amateur or professional 3D artist can spot that his model was well put together. What else have you got?


Oh, and, fanboy? Try colleague. I'm an aspiring 3D artist, and Julian here has set the bar. Try to find any freelance 3D artist who touches this talent. Reward: one cookie.

If you'd like to see another one of his models/renders with no Photoshop, see here:

http://www.renderosity.com/viewed.ez?galleryid=299555&Start=1&Artist=Julian_J&ByArtist=Yes

PM me when you get a clue.

Uhh, you can argue all you want on this; it comes down to a matter of opinion, bub. And yes, I am a seasoned artist. Thanks for assuming otherwise.
 
Yeah, they killed the thread. Whoever started it should make a new one in GD about ethics and computer AI. Hell, anybody could, but I have to leave now. Bye.
 
Hint? Please stfu.

Anyway, I was the one who started the whole AI thing in the first place. Then this twit gets all defensive when I don't agree about his idol's work. Fanboy troll.

Anyway, I was also thinking about the day we get sensory input/output. For instance, smell-o-vision: would actually smelling the atmosphere and objects in a game world enhance the experience more? And what about taste, or touch? I once thought about a full-body suit with tiny nodes that send small jolts of electricity across the entire body. Kind of like a force-feedback body suit. Could be cool....

I think eventually we may even get to the "holodeck" stage in home entertainment, but that is probably a very long way off from now. :)
 
What a pointless argument. Madsman, go do some reading on the latest rendering advancements that have been made.

You sir, are an idiot.
 
Originally posted by Seikeden
What a pointless argument. Madsman, go do some reading on the latest rendering advancements that have been made.

You sir, are an idiot.

Hear, hear.

You'd think any HL2 fan would have an inkling of what is out there. If we can do HDR and DOF on the fly, what can we do with a 12+ hour render? Someone needs to do their homework.

Or maybe I'm just another "ignorant troll". :rolleyes:
 
I really wouldn't want to see liquid blood oozing. I'm not really the gore-gore-gore type. In an email to Gabe, someone asked whether, if you had a crate with water in it and then shot through (breaking) the wood, the water would drain out. I'm 97% sure Gabe said no.
 
Face facts: you are not going to be pleased.

That is called wisdom. You have experienced the FPS genre to the point of boredom. I can't wait for HL2, but I do at least know why all the games coming out today suck: I've "been there, done that". But to an eight-year-old, or whoever else has never played an FPS, that same game is going to be totally original, new, and exciting.

So are you going to make something of your new-found wisdom? (Using the term loosely. :p )
 
In MadsMan's defense, Julian (or whatever his name is) says that to get his results, he renders his scenes in layers and then composites them in Photoshop, with some tweaking to get everything to gel. While he is an undeniably talented artist and 3D modeler, the fact is that, according to him, no single program can give the results you are seeing.
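
For anyone wondering what "rendering in layers then compositing" actually means, here's a rough sketch using Python and Pillow. The pass file names are made up; the point is just the workflow: separate render passes get blended together, then hand-tweaked, which is exactly the part no single renderer does for you.

    from PIL import Image, ImageChops, ImageEnhance

    # Separate passes exported from the 3D package (file names hypothetical).
    diffuse = Image.open("head_diffuse.png").convert("RGB")
    specular = Image.open("head_specular.png").convert("RGB")
    ambocc = Image.open("head_ao.png").convert("RGB")

    # Multiply the ambient-occlusion pass over the diffuse to darken the
    # crevices, then screen the specular highlights back on top.
    base = ImageChops.multiply(diffuse, ambocc)
    comp = ImageChops.screen(base, specular)

    # The "ages in Photoshop" step: nudge levels until everything gels.
    comp = ImageEnhance.Contrast(comp).enhance(1.1)
    comp = ImageEnhance.Brightness(comp).enhance(0.95)
    comp.save("head_final.png")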
 
Originally posted by Mountain Man
In MadsMan's defense, Julian (or whatever his name is) says that to get his results, he renders his scenes in layers and then composites them in Photoshop, with some tweaking to get everything to gel. While he is an undeniably talented artist and 3D modeler, the fact is that, according to him, no single program can give the results you are seeing.

Yes, thank you, Mountain Man; this is what I was trying to communicate. I never said Julian wasn't talented, but these trolls seem to take it that way, throwing insults and trying to pass the blame to me by calling me an "idiot", when really they haven't the slightest clue what they are talking about. People like that really piss me off. :flame:
 
Originally posted by MadsMan
Yes, thank you, Mountain Man; this is what I was trying to communicate. I never said Julian wasn't talented, but these trolls seem to take it that way, throwing insults and trying to pass the blame to me by calling me an "idiot", when really they haven't the slightest clue what they are talking about. People like that really piss me off. :flame:

First off, you were the first to start with the name-calling. It was at this point I realized I was arguing with someone not mature enough to have a logical, intelligent discussion. (Read my posts; I never called you any names.)

Secondly, I know he uses composites. They are simply to get the right levels of light, saturation, and color. (Again, as I've said several times.)

Next, Madsman, maybe you need to work on your communication skills. You rather blatantly claimed that his 3D work would not stand up under scrutiny. Since you appear not to remember this, let me jog your memory.

Note: I really didn't want to dig these up, but you leave me no choice. :)

Here it goes:

I have seen this kind of trick before, and it's most likely a model that he cannot turn 360 degrees; otherwise the illusion is lost. I do believe the eyes and hair are 3D, but the rest would not stand up to scrutiny.

Ahem. You are correct; this usually happens with laser-scanned models. Usually there is a lot of cleaning up to do. But considering our Julian in question is 28 and middle class, I doubt he would have the thousands of dollars to hire a company that leases this service in order to scan a random boy's face for use in a non-commercial environment.

And if this isn't your argument, why would he go through the process of creating a 3D model that only looked good from one very specific pose and camera angle? That totally defeats the purpose of 3D modelling. Think about it.

Now continue:

And why do his shoulders look blocky in the render, and nice and smoothed out in the viewport? It just doesn't make sense.

For someone with the intense knowledge you claim to have, you sure do make some rather elementary impulse judgements. Any amateur can see that the second viewport is a flat render, probably a straight screenshot from Maya. When this happens, you get to see all of the object vector placeholders, such as lighting, guides, bones, space warps, etc. You also see the horizon plane marker (the grey grid).

What looks to be a taper modifier gizmo, coupled with the horizon plane, is creating an optical illusion that seems to blend in the polys on the boy's shoulders. If you look closely, you can see the vertices... pretty easily, I might add.

Also, the shaded version of the model has some inconsistencies in lighting compared to the full render, as if the full render has a photo map projected over the geometry, thereby inheriting its lighting.

Hehe... are you sure you are a "seasoned artist"? Again, the second viewport is almost strictly diagrammatic. It basically shows lighting representation: where the light intersects the model. It is by no means a representation of accurate intensity upon final render. Again, most entry-level 3D artists know this.

If you look in the shaded viewport, you will also see that the background is pre-lit, which could also support what I am saying....

The background is a photo that was edited in Photoshop. He, rather convincingly, set up his lighting to mimic the surroundings of the background. Pretty standard stuff.

But hell, I could be wrong...

Now you're getting it.

Fin.
 
OK, I obviously need to clear some shit up, because you misinterpreted everything I said. (And you also like to beat dead trees, I see.) lol


Originally posted by nbk-brando
First off, you were the first to start with the name-calling. It was at this point I realized I was arguing with someone not mature enough to have a logical, intelligent discussion. (Read my posts; I never called you any names.)

Secondly, I know he uses composites. They are simply to get the right levels of light, saturation, and color. (Again, as I've said several times.)

Umm, nowhere in your posts prior to mine did you mention compositing. I was actually the first to mention it. Stop trying to claim you said all this crap when it's obvious that you didn't. My argument was that his model could not be animated and spun around while still maintaining that level of realism in a straight render, simply because, to achieve that realism, he had to take it into a secondary 2D app and fine-tune it over many hours. So yes, as a still image, it holds up nicely.



Next, Madsman, maybe you need to work on your communication skills. You rather blatantly claimed that his 3D work would not stand up under scrutiny. Since you appear not to remember this, let me jog your memory.



When I first challenged the image's validity, I did so on the assumption that it was solely a 3D render, with no post-manipulation. It wasn't until later that I read Julian's comments on his model, stating that it was in fact NOT a straight render and had to be extensively worked on in Photoshop to achieve those results. Therefore, it WOULD NOT stand up to 3D scrutiny. I was correct in this statement, as he could not animate that model and have it look as photo-real as it does in that single still frame.

However, the other two links you showed me WOULD stand up to 3D scrutiny.
http://www.beans-magic.com/new-cg4.html
http://www.halflife2.net/forums/att...mp;postid=52522


Note: I really didn't want to dig these up, but you leave me no choice. :)

Could have fooled me. You seem to enjoy making an ass out of yourself.




Ahem. You are correct; this usually happens with laser-scanned models. Usually there is a lot of cleaning up to do. But considering our Julian in question is 28 and middle class, I doubt he would have the thousands of dollars to hire a company that leases this service in order to scan a random boy's face for use in a non-commercial environment.

And if this isn't your argument, why would he go through the process of creating a 3D model that only looked good from one very specific pose and camera angle? That totally defeats the purpose of 3D modelling. Think about it.


Wtf. This makes no sense. I never stated it was a 3D scan. If you read what I said above, I was referring to the photo-real appearance of the model, not the model's 360-degree geometry itself. I even stated, in that same post about "3D scrutiny", that I was referring to projected texture maps.

Do you even know what a texture map projection is? Clearly you do not, because if you did, you would have understood what I meant. See, when you project a texture onto a model, it's basically a planar projection, and on models with complex topology it will only appear "correct" from one angle. If you rotate the model, you begin to see texture "stretching" (see the sketch below). It's easy enough to model a head to match a photo as closely as possible, then project the photo over the geometry to achieve a sort of proxy result. There have actually been many artists who did this, and it caused quite the controversy.

When I saw this image, I assumed that it was a straight render, and I corrected myself when I discovered that it was post-manipulated. You should have understood the implications of this, but apparently it just didn't sink in.
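
If you want to see the stretching for yourself, here's a minimal sketch of a planar projection in Python. The numbers are made up, nothing from Julian's actual scene:

    import numpy as np

    def planar_u(verts):
        # Planar projection along -Z: u comes from x alone; z is discarded.
        x = verts[:, 0]
        return (x - x.min()) / (x.max() - x.min())

    def rotate_y(verts, degrees):
        a = np.radians(degrees)
        rot = np.array([[np.cos(a), 0, np.sin(a)],
                        [0, 1, 0],
                        [-np.sin(a), 0, np.cos(a)]])
        return verts @ rot.T

    # A crude "cheek" curving away from the camera (hypothetical points).
    verts = np.array([[0.0, 0.0, 0.0],
                      [1.0, 0.0, 0.5],
                      [1.5, 0.0, 1.5],
                      [1.6, 0.0, 3.0]])

    u = planar_u(verts)                            # baked once, head-on
    print(np.diff(u))                              # spacing that looks right
    print(np.diff(planar_u(rotate_y(verts, 60))))  # spacing after a 60-degree turn

The UVs were baked from the front, so once the geometry turns, the same texels have to cover very different amounts of surface. That mismatch is the stretching.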




For someone with the intense knowledge you claim to have, you sure do make some rather elementary impulse judgements. Any amateur can see that the second viewport is a flat render, probably a straight screenshot from Maya. When this happens, you get to see all of the object vector placeholders, such as lighting, guides, bones, space warps, etc. You also see the horizon plane marker (the grey grid).


Uhhh, no shit. Did you not read my post? I said "in the shaded view". Duh. The shaded view is the term often used to refer to the working viewport, usually a screencap of the workspace. If I wanted to refer to the rendered image, I would simply say "the render". Get a clue. Anybody who works in 3D understands what "shaded model" means. And don't try to say "oh, but models use shaders, so the render is actually the shaded view", because "shaders" was a term that Pixar came up with for RenderMan. 3ds Max's equivalent is called "Materials".


What looks to be a taper modifier gizmo, coupled with the horizon plane, is creating an optical illusion that seems to blend in the polys on the boy's shoulders. If you look closely, you can see the vertices... pretty easily, I might add.


Yes, I know wtf those gizmos are. First of all, he is using Maya. There is a freeform spotlight gizmo, a target spotlight, two clusters, a wrap deformer, and two locators. I admit I was wrong about the shaded viewport vs. the render; I have a crappy 15" monitor at work. Meh.


Hehe... are you sure you are a "seasoned artist"? Again, the second viewport is almost strictly diagrammatic. It basically shows lighting representation: where the light intersects the model. It is by no means a representation of accurate intensity upon final render. Again, most entry-level 3D artists know this.

Yes, I am quite sure. Thanks for asking again. Feel stupid yet? Keep asking; maybe you will get lucky and I will respond differently.


The background is a photo that was edited in Photoshop. He, rather convincingly, set up his lighting to mimic the surroundings of the background. Pretty standard stuff.

No, really? I thought that was a portal to another dimension!! DAAAAMMMNNNN



Now you're getting it.

You obviously don't. This explains why I called you "ignorant".


I agree. This thread needs to be locked. It's nothing but filth now. My apologies.

Pardon the edits; I had a lot of grammatical errors. English is not my first language. ^_^
 
Lalala, I wasn't even talking about the boy render. His gnome guy, or whatever it is, shows a similar level of realism, with pictures of the texture map as well as renders from many viewpoints. Actually, I have no idea what point you were trying to make anymore, since you seem to have gone back on every point and obscured it so that it comes out as "Julian's work uses compositing and post-processing, and in some cases is not animatable." Wow, great.

I'm sorry, but that last one is completely faked Photoshop trickery.
 
Originally posted by MadsMan
Actually, HL2 does have per-polygon hit detection. It's in the "Valve info only" thread somewhere.

No, Half-Life 2 uses hitboxes for sure.
 
Originally posted by asd
No, Half-Life 2 uses hitboxes for sure.
Technically you are both correct.

HL2 has per-poly hit detection... but for most calculations it uses hitboxes, because the extra precision is useless and just eats up CPU time.
 
Gabe has also said that their hitbox implementation is virtually indistinguishable from per-pixel collision detection.

To get the right mental image, picture high-resolution hitboxes that conform perfectly to the model.
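
For the curious, the two-phase idea looks roughly like this in code. This is a generic sketch of the standard technique with made-up geometry; it is not Valve's actual implementation:

    import numpy as np

    def ray_hits_aabb(origin, direction, box_min, box_max):
        # Slab test: does the ray cross the axis-aligned hitbox?
        inv = 1.0 / direction  # assumes no zero components in direction
        t1, t2 = (box_min - origin) * inv, (box_max - origin) * inv
        tmin, tmax = np.minimum(t1, t2).max(), np.maximum(t1, t2).min()
        return tmax >= max(tmin, 0.0)

    def ray_hits_triangle(origin, direction, v0, v1, v2, eps=1e-9):
        # Moller-Trumbore ray/triangle intersection.
        e1, e2 = v1 - v0, v2 - v0
        p = np.cross(direction, e2)
        det = e1.dot(p)
        if abs(det) < eps:
            return False
        inv_det = 1.0 / det
        s = origin - v0
        u = s.dot(p) * inv_det
        if u < 0 or u > 1:
            return False
        q = np.cross(s, e1)
        v = direction.dot(q) * inv_det
        return v >= 0 and u + v <= 1 and e2.dot(q) * inv_det > eps

    def trace_shot(origin, direction, hitbox, triangles, per_poly=False):
        # Phase 1: the cheap box test that most game logic stops at.
        if not ray_hits_aabb(origin, direction, *hitbox):
            return False
        if not per_poly:
            return True
        # Phase 2: pay for per-triangle precision only when it matters.
        return any(ray_hits_triangle(origin, direction, *t) for t in triangles)

The triangle test only ever runs once the box test already says "maybe", which is why hitboxes that hug the model get you nearly all of the precision at a fraction of the CPU cost.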
 