Ahh ok, I'm familiar with opening and closing files and understand the concept of calls, I get it now. I haven't done any programming for over 20 years (and I was never much good at it anyway), so I'm completely unfamiliar with the terminology of "modern" languages. One thing I've never really understood about a lot of languages is why they need to be so "abstract". I've often thought that they're purposely designed to be obtuse just to keep programming "elite". Whatever happened to the goal of a high level language that could just be programmed in plain language? Back in the day I remember talk of programming eventually getting to the level where you could almost write in plain English what you wanted to do and the computer would translate that into its own internal machine code and do it. Seems to be going the other way to me. The fact that developers are forever trying to reinvent the wheel doesn't help either, IMHO. A multitude of languages, each with their own syntax and terminology, to achieve exactly the same thing.
I'd do it the first way too. Like you say, it just seems easier to read (though I've no idea if it makes any difference to the way they run). Just be thankful it doesn't care how many spaces there are between the start of the line and the code!! Fortran was a real pig for that.
Whenever I write personal code I use the first style and conform with the Java coding convention in terms of indentation (a tab equal to 4 spaces in width), but at work we use spaces only (no tabs).
import java.util.Scanner;

/**
 * This is the command line version of the SpaceWaster application.
 * Arguments: <filepath> <number of runs> e.g. E:/ 1
 * @author Gus
 */
public class SpaceWasterCommandLine {
    public static void main(String[] args) {
        // Program has been run from a batch file, display usage information
        if (args.length == 1 && args[0].equals("BAT")) {
            System.out.println("Usage: <filepath> <number of runs> e.g. E:/ 1");
            System.out.println("\nPlease input jobs...");
            String input = new Scanner(System.in).nextLine();
            args = input.split(" "); // treat the typed line as the argument list
        }

        // Initially create all the jobs, then run them
        for (int i = 0; i + 1 < args.length; i += 2) {
            String rootPath = args[i];
            int numRuns = Integer.parseInt(args[i + 1]);
            System.out.println("Adding job with rootPath " + rootPath + " and numRuns " + numRuns);
        }
    }
}
I'd say they are that way to make programming more accessible. Higher-level languages, just like assembly and FORTRAN before them, try to abstract away the details of how a computer works so you can write code quickly and easily without having to worry about the down-and-dirty details.
You use file abstractions to access the disk instead of keeping track of file systems, platters, cylinders, sectors etc. and writing the data manually. You use graphics APIs instead of writing a command buffer native to the GPU and sending it across the PCIe bus. You write assembly instead of having to write binary opcodes using 1s and 0s. It's just easier, and the more complex applications get (because of user expectations), the more layers of abstraction are needed to keep things at a manageable complexity level. We usually only have to worry about the top level though, meaning it's easier for our pathetic human brains to cope. These languages simply let you describe what you want done, not how you want the computer to do it (which, let's face it, you don't really care about).
I know what you mean about the "write code in English" bit though, and there are languages that will let you do something close to that. BASIC has been around forever, but some popular incarnations are getting quite long in the tooth these days. Ruby is one of the newer languages that try to stay fairly intuitive, so there are alternatives that are FAR better than C++ in this respect.
They're still not anything like real English though, and that's simply because English is a spectacularly bad language for instructing a computer. It's just too vague and ambiguous and requires extraordinary amounts of logic and intelligence to make heads or tails of. It's better/easier to teach a human how to communicate in a way a computer can understand (we're good at that sort of stuff) than it is to teach a computer how to understand a human.
EDIT: Just wanted to add one of my favourite quotes on the subject, which pretty much sums up high level languages for me: "Being abstract is something profoundly different from being vague... The purpose of abstraction is not to be vague, but to create a new semantic level in which one can be absolutely precise." - E. Dijkstra
Thinking about it that way I suppose it's true; even relatively low-level languages are actually abstractions of what is really going on in the hardware as far as bits of data etc. are concerned.
I just remember that when I first heard of the use of libraries as a way of calling specific functions, I thought "at last, simple programming". Coding will just be a case of stringing together the calls in the order you want to execute them, without needing or caring to know how the computer actually achieves it. Take an imaginary file-handling call for example:
write.file(file.name)
would take care of all the opening and writing to, and then releasing of the file, with no worries about needing any bits of code to come around later and close it etc. How wrong I was!
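Funnily enough, modern standard libraries did eventually grow something very close to that imaginary call. A minimal Java sketch (file name made up) where a single library call opens the file, writes the data, and closes it again:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public class OneShotWrite {
    public static void main(String[] args) throws IOException {
        // One call: opens the file, writes the bytes, and closes it again
        Files.write(Paths.get("out.txt"), "some data\n".getBytes());

        // Reading it back is just as short
        System.out.print(new String(Files.readAllBytes(Paths.get("out.txt"))));
    }
}
```

No explicit open or close anywhere; the library handles the whole lifecycle inside the call.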
Though I guess if you open/write/close a file for every bit of data and you know you're going to be writing more data to that file, it's maybe a bit of an overhead to have to keep opening and closing it (from the point of view of file system access speeds on the hard drive, I mean).
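And that overhead is exactly why the separate open/write/close calls survive: when you know more writes are coming, you open once, hold the handle, and close at the end. A small Java sketch (file name made up):

```java
import java.io.BufferedWriter;
import java.io.FileWriter;
import java.io.IOException;

public class ManyWrites {
    public static void main(String[] args) throws IOException {
        // Open once, write many times, close once: the open/close cost
        // is paid a single time instead of once per record.
        try (BufferedWriter out = new BufferedWriter(new FileWriter("log.txt"))) {
            for (int i = 0; i < 3; i++) {
                out.write("record " + i);
                out.newLine();
            }
        } // try-with-resources closes the file here, even on an exception
    }
}
```

The buffering also batches the small writes into fewer actual disk operations.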
btw - do people still use subroutines any more, or are they a thing of the past?
Exactly. Even programmable processors are just an abstraction made because soldering all those ORs, ANDs and NANDs together got tiresome when trying to perform complex tasks.
Yep. Speed is a huge part of it. An important aspect of any good abstraction is to enable the computer to work as efficiently as possible underneath it. It's also simply more efficient and less error prone to specify what file (filename) you want to work on once instead of specifying it at every write call. You have to weigh these things against one another really. Usability vs. efficiency.
I wouldn't say Ruby is too far away from your example though. Take for instance this snippet which copies text from one file to another:
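The snippet itself didn't survive the quote; a minimal Ruby sketch of the kind of file-copying code it may have shown (file names made up, with a setup line so it runs standalone):

```ruby
# Create a small source file so the example runs on its own
File.write("source.txt", "hello\nworld\n")

# Copy the text from one file to another, line by line.
# The block form of File.open closes each file automatically.
File.open("source.txt") do |source|
  File.open("copy.txt", "w") do |copy|
    source.each_line { |line| copy.write(line) }
  end
end
```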
It's not English, but it's pretty understandable if you read the code out loud.
Most certainly. Though they're usually called "functions", or if they're organized inside classes, "methods" or "member functions". Depends on who you talk to though. They're the core building blocks of any program really. Without them programming would be a messy task.
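In Java terms, for instance, yesterday's subroutine is today's method; a trivial sketch:

```java
public class SubroutineDemo {
    // A "subroutine" in modern clothes: a method you call by name,
    // pass arguments to, and get a result back from.
    static int square(int n) {
        return n * n;
    }

    public static void main(String[] args) {
        System.out.println(square(7)); // prints 49
    }
}
```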
It's just the way you said you could not NUnit-test the UI. All you need is tests up to the controller. The view should have zero code of worth or complexity if done right, and you can drive and test everything through the controller.
That said, it took my company quite a while to bite the bullet and use tests. Months of kicking and fighting, and it has pushed out our timelines a HUGE amount. It has already paid for itself though.
It did take a good 6 months to find our feet though and get a solid framework in place.
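A minimal sketch of that drive-everything-through-the-controller idea in plain Java, with a hypothetical counter app (all names invented), showing how tests can exercise the logic with no UI present:

```java
// A trivial model/controller split: the view would hold no logic,
// so tests can call the controller directly and skip the UI.
class CounterModel {
    private int value = 0;
    int getValue() { return value; }
    void increment() { value++; }
}

class CounterController {
    private final CounterModel model;
    CounterController(CounterModel model) { this.model = model; }

    // All behaviour lives here, not in the view
    void onIncrementClicked() { model.increment(); }
    String labelText() { return "Count: " + model.getValue(); }
}

public class ControllerTestDemo {
    public static void main(String[] args) {
        CounterModel model = new CounterModel();
        CounterController controller = new CounterController(model);

        // "Click" twice without any UI on screen
        controller.onIncrementClicked();
        controller.onIncrementClicked();

        if (!controller.labelText().equals("Count: 2"))
            throw new AssertionError("controller logic broken");
        System.out.println("controller tests passed");
    }
}
```

In a real project the same calls would sit inside NUnit/JUnit test methods; the point is only that nothing here needs a window to exist.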
Sounds more like Objective-C, the language that was the stepping stone between C and C++.
Makes sense TBH. As soon as you start to add virtual dispatch and the clutter that true OO in C++ brings, you are just adding complexity that is not required when speed is the key, as is the case with LFS.
1-letter variables, I remember those days:
char c = ...
for (int i = ...
Those 2 are a habit I still find hard to shake 20 years later lol
Scawen, I understand perfectly how comfy it feels being the only one that touches your code and how reluctant you can be when it comes to adding people to your project - been there, felt that. BUT... if you spent some time writing a guide on how YOU want things to be written, don't you think some people could help with the less important/repetitive parts of the program?
I still haven't stumbled upon a project where other people can't give a hand, at least in certain areas where everything is already thought out.
Maybe using automated tests to validate the 'outsider' code could work out.
And this isn't another "ADD MORE PEOPLE TO SPEED UP LFS" post; it's just that since you may read this, I'm really curious about the answer.
Why do you need to find new and inventive ways to suggest adding more developers? I'm confident Scawen has thought all this through himself, many times over, but still he doesn't want more cooks in the kitchen. That should tell you all you need to know.
More developers means more communication needed. There are just no ways around that.
There are no issues with automatic garbage collection in a game environment. Just fire it off every vertical blank *shrug*. Re: file handles, just create a new handle - let the GC close the old handle when it gets around to it (next vertical blank). It doesn't matter if the file is open twice, Windows can handle it - in fact, you're guaranteed it's in the HD cache, possibly L2 cache.
*shrug* I don't see the problem with having automatic GC. I use OO quite a lot these days, and I let my compiler handle memory cleanup when it destructs an object. All I do is occasionally point handles to null and the GC does the rest; I fire the GC off every time I flip the display buffer.
It's not a bad habit, it's good convenience and rapid development.
The only thing that caught me out recently was a PHP script (which has fully automatic GC; you need to point your $handles to null to destruct them). It wasn't the end of the world when the script failed (fail-safes and recoveries are needed on any mission-critical system anyway). It was no biggy to fix, just had to point some recursive stuff to null and RAM use shrank right down again; re-ran the script and the fail-safes allowed it to auto-recover...
I hate GC. Automation of memory management when you are struggling to keep absolute control is not welcome in my book.
As for the initial question, I don't think it matters how many KB the source code is, but it matters what those KB of code are. Anyone can write 500 files with 6 MB of code.
Scawen managed to fit one of the best racing simulators ever into that space.
Umm, and if you open it for writing both times? The second time will fail unless I've gone senile since the last time I tried that. And what about mutex and database locks? Locks of any kind really. Letting the GC handle those would be silly.
I don't find a GC any more convenient nor more "rapid" than using RAII. Manually making sure I clean up resources at the right time, no matter how the program flow goes, is a pain in the backside and I don't want to do it. Just a shame no "modern" languages support RAII properly.
EDIT: Consider this example with a function locking some resource before doing some work that may fail with an exception:
Some time ago I was working on a large Java GUI project (100 thousand lines of code). The kind of GUI in which multiple users interact with huge tables and charts refreshed in real time.
We discovered that the countless simple objects generated during the refresh process - one rectangle per cell update, with dirty rectangles then merged and clipped according to the documents' frames - were causing the garbage collector to use 100% CPU on any average PC with only a few documents open (even with the documents minimized!!!). This was not a design problem on our side; we had no control over these objects because they were generated inside Swing. As soon as we used native components directly instead of Swing, the CPU load fell from 100% to 5% for the same documents.
These rectangle objects were used only in one function and should have been destroyed when the function returned, but Java's overly general collecting process caused them to go through the entire cycle...
In C# this is different, because simple objects like structs are allocated on the stack and cleaned up when the function returns. But it is very easy to have problems with Java. It is not flexible enough in its allocation mechanism and does not give you the choice. If you use Java in a game, you have to be prepared to spend some time implementing object pooling and cleaning strategies.
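A minimal Java sketch of the object-pooling idea mentioned above (all names invented):

```java
import java.util.ArrayDeque;

// A rectangle-like object we want to reuse instead of allocating
// a fresh one per cell update and leaving it for the GC.
class Rect {
    int x, y, w, h;
    void set(int x, int y, int w, int h) {
        this.x = x; this.y = y; this.w = w; this.h = h;
    }
}

// A tiny pool: acquire() hands out a recycled object when one is
// available, so steady-state refreshes generate no garbage at all.
class RectPool {
    private final ArrayDeque<Rect> free = new ArrayDeque<>();

    Rect acquire() {
        Rect r = free.poll();
        return (r != null) ? r : new Rect();
    }

    void release(Rect r) {
        free.push(r); // back into the pool instead of becoming garbage
    }
}

public class PoolDemo {
    public static void main(String[] args) {
        RectPool pool = new RectPool();
        Rect a = pool.acquire();
        a.set(0, 0, 10, 10);
        pool.release(a);

        // The next acquire reuses the same object: no new allocation
        Rect b = pool.acquire();
        System.out.println(a == b); // prints true
    }
}
```

This only helps with objects you create yourself, of course; as the post notes, it does nothing for allocations buried inside Swing.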
Techland used to develop games in Java. The Chrome engine, Xpand Rally and Call of Juarez are written in Java.
I once worked as a support engineer on a mainframe system that had approximately 8 million lines of code. It took 12 of us to support it and there were another 8 developers. It was written using a database known as Adabas in a language called Natural. All command driven. Any scripts we ran against the system had to be run by the operations department made up of another 8 support people.
Crazy really when you think about it. Mind you, this particular company who shall remain nameless, had such a plethora of systems and languages that there were 250 of us in total in the IT department.
@Wien: I think we are thinking of different concepts. Personally I don't consider closing a resource to be part of garbage collection.
To me GC is memory cleanup, that is, the removal of dead vars and memory allocated to objects I'm finished with.
Leaving a resource open is like burgling a house and leaving your fingerprints everywhere, stopping in front of the nearest CCTV camera with your loot and using your reflection in the lens to tidy your hair... you're going to get caught. But this is a fundamentally different concept in my mind from memory management.
Tight, carefully laid-out code doesn't need automatic cleanup; the code you write does the housekeeping on its own. But that's just my control-freak self, who has spent too many nights coding assembler on calculators :hide:
I wouldn't dare to actually try it, but I rather suspect that GC would cause complete chaos with the scientific software I use and occasionally write, which can fill up several gigs of RAM with useful data in no time.