Why is the object destructor paradigm in garbage collected languages pervasively absent?

Problem Detail: Looking for insight into decisions around garbage collected language design. Perhaps a language expert could enlighten me? I come from a C++ background, so this area is baffling to me. It seems nearly all modern garbage collected languages with OOPy object support, like Ruby, JavaScript/ES6/ES7, ActionScript, Lua, etc., completely omit the destructor/finalize paradigm. Python seems to be the only one, with its class __del__() method. Why is this?

Are there functional/theoretical limitations within languages with automatic garbage collection which prevent effective implementations of a destructor/finalize method on objects? I find it extremely lacking that these languages consider memory the only resource worth managing. What about sockets, file handles, application state? Without the ability to implement custom logic to clean up non-memory resources and state on object finalization, I'm required to litter my application with custom myObject.destroy() style calls, placing the cleanup logic outside my "class", breaking attempted encapsulation, and leaving my application open to resource leaks through human error rather than having them handled automatically by the GC.

What are the language design decisions that lead to these languages not having any way to execute custom logic on object disposal? I have to imagine there is a good reason. I'd like to better understand the technical and theoretical decisions that resulted in these languages not having support for object destruction/finalization.

Update: Perhaps a better way to phrase my question: why would a language have the built-in concept of object instances with class or class-like structures, along with custom instantiation (constructors), yet completely omit the destruction/finalization functionality? Languages which offer automatic garbage collection seem to be prime candidates to support object destruction/finalization, as they know with 100% certainty when an object is no longer in use. Yet most of those languages do not support it. I don't think it's a case where the destructor may never get called, as that would be a core memory leak, which GCs are designed to avoid. I could see a possible argument that the destructor/finalizer may not get called until some indeterminate time in the future, but that didn't stop Java or Python from supporting the functionality. What are the core language design reasons not to support any form of object finalization?
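For concreteness, the kind of automatic cleanup the question asks for is what Python's __del__ (the one exception the question cites) provides. A minimal sketch, using a dict flag rather than a real resource check so the effect is observable; note that the prompt finalization shown here is a CPython reference-counting behavior, not a language guarantee:

```python
import socket

closed = {"flag": False}  # observable side channel for the demo

class TempSocket:
    """Ties a socket's lifetime to the object via Python's __del__."""
    def __init__(self):
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

    def __del__(self):
        # Runs when the object is reclaimed -- *if* and *when* it is.
        self.sock.close()
        closed["flag"] = True

s = TempSocket()
del s                  # CPython's refcount hits zero, __del__ runs here
print(closed["flag"])  # True under CPython; other runtimes promise nothing
```

Other implementations (PyPy, for instance) defer __del__ until their tracing collector runs, which is exactly the nondeterminism the answer below turns on.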

Asked By : dbcb

Answered By : kdbanman

The pattern you’re talking about, where objects know how to clean their resources up, falls into three relevant categories. Let’s not conflate destructors with finalizers – only one is related to garbage collection:

  • The finalizer pattern: cleanup method declared automatically, defined by programmer, called automatically. Finalizers are called automatically before deallocation by a garbage collector. The term applies if the garbage collection algorithm employed can determine object life cycles.
  • The destructor pattern: cleanup method declared automatically, defined by programmer, called automatically only sometimes. Destructors can be called automatically for stack-allocated objects (because object lifetime is deterministic), but must be explicitly called on all possible execution paths for heap-allocated objects (because object lifetime is nondeterministic).
  • The disposer pattern: cleanup method declared, defined, and called by programmer. Programmers make a disposal method and call it themselves – this is where your custom myObject.destroy() method falls. If disposal is absolutely required, then disposers must be called on all possible execution paths.
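The three patterns can be contrasted in Python. The sketch below (class and method names are hypothetical, chosen for illustration) shows the disposer pattern in both its raw form, where the caller must remember a try/finally on every execution path, and the sugared form that context managers provide:

```python
import os
import tempfile

class FileWrapper:
    """Disposer pattern: cleanup is an ordinary method the caller must invoke."""
    def __init__(self, path):
        self.f = open(path, "w")
        self.closed = False

    def dispose(self):  # the myObject.destroy() style call from the question
        if not self.closed:
            self.f.close()
            self.closed = True

    # Context-manager hooks let the 'with' statement call the disposer for us.
    def __enter__(self):
        return self

    def __exit__(self, *exc_info):
        self.dispose()

path = os.path.join(tempfile.gettempdir(), "disposer_demo.txt")

# Raw disposer: correctness depends on a finally block on every path.
w = FileWrapper(path)
try:
    w.f.write("hi")
finally:
    w.dispose()

# Sugared disposer: the language guarantees __exit__ runs, even on exceptions.
with FileWrapper(path) as w2:
    w2.f.write("hi again")

print(w.closed, w2.closed)  # True True
```

The with statement does not make this a finalizer: disposal is tied to lexical scope, not to object reachability, which is why it composes with any garbage collection algorithm.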

Finalizers are the droids you’re looking for. The finalizer pattern (the pattern your question is asking about) is the mechanism for associating objects with system resources (sockets, file descriptors, etc.) for mutual reclamation by a garbage collector. But, finalizers are fundamentally at the mercy of the garbage collection algorithm in use. Consider this assumption of yours:


Languages which offer automatic garbage collection … know with 100% certainty when an object is no longer in use.

Technically false (thank you, @babou). Garbage collection is fundamentally about memory, not objects. If or when a collection algorithm realizes an object’s memory is no longer in use depends on the algorithm and (possibly) on how your objects refer to each other. Let’s talk about two types of runtime garbage collectors. (There are lots of ways to alter and augment these two basic techniques.)

  1. Tracing GC. These trace memory, not objects. Unless augmented to do so, they don’t maintain back references to objects from memory. Unless augmented, these GCs won’t know when an object can be finalized, even if they know when its memory is unreachable. Hence, finalizer calls aren’t guaranteed.
  2. Reference Counting GC. These use objects to track memory. They model object reachability with a directed graph of references. If there is a cycle in your object reference graph, then all objects in the cycle will never have their finalizer called (until program termination, obviously). Again, finalizer calls are not guaranteed.

TLDR

Garbage collection is difficult and diverse. A finalizer call cannot be guaranteed before program termination.


Question Source : http://cs.stackexchange.com/questions/37462
