[Solved]: Does immutability in functional programming really exist?

Problem Detail: Although I work as a programmer in my daily life and use all the trendy languages (Python, Java, C, etc.), I still have no clear view of what functional programming is. From what I’ve read, one property of functional languages is that data structures are immutable. For me this alone raises a lot of questions. But first I will write down a bit of what I understand of immutability, and if I’m wrong, feel free to correct me. My understanding of immutability:

  • When a program starts it has fixed data structures with fixed data
  • One can’t add new data to these structures
  • There are no variables in the code
  • You can merely “copy” from the already existing data or currently calculated data
  • Because of all above, immutability adds huge space complexity to a program

My questions:

  1. If data structures are supposed to remain as they are (immutable), how the hell does someone add a new item to a list?
  2. What is the point in having a program that can’t get new data? Say you have a sensor attached to your computer that wants to feed data to the program. Would that mean that we can’t store the incoming data anywhere?
  3. How is functional programming good for machine learning in that case, since machine learning builds on the assumption of updating the program’s “perception” of things – and thus storing new data?

Asked By : Pithikos

Answered By : Jake

When a program starts it has fixed data structures with fixed data

This is a bit of a misconception. A program has a fixed form and a fixed set of rewrite rules, but those rewrite rules can explode into something much larger. For instance, the expression [1..100000000] in Haskell is represented by a very small amount of code, but its normal form is massive.
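As a rough sketch of that gap between textual form and normal form (the names here are just for illustration), a tiny Haskell expression can denote a list of a hundred million elements, and laziness means only the part we actually demand is ever materialised:

    -- A tiny expression that denotes a list of one hundred million elements.
    big :: [Int]
    big = [1 .. 100000000]

    -- Demanding only a prefix forces only a small part of the normal form.
    firstTen :: [Int]
    firstTen = take 10 big   -- [1,2,3,4,5,6,7,8,9,10]

    main :: IO ()
    main = print firstTen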

One can’t add new data to these structures

Yes and no. The purely functional subset of a language like Haskell or ML can’t get data from the outside world, but any practical programming language has a mechanism for inserting data from the outside world into the purely functional subset. In Haskell this is done very carefully, but in ML you can do it whenever you want.
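A minimal Haskell sketch of that boundary (the function name shout is made up for illustration): the impure part reads a line from the outside world, and everything after that is ordinary pure code that never knows where the data came from.

    import Data.Char (toUpper)

    -- Pure code: no way to reach the outside world from here.
    shout :: String -> String
    shout s = map toUpper s ++ "!"

    -- Impure boundary: getLine brings outside data in, putStrLn sends results out.
    main :: IO ()
    main = do
      line <- getLine
      putStrLn (shout line)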

There are no variables in the code

This is pretty much true, but don’t confuse it with the idea that nothing can be named. You name useful expressions all the time and constantly reuse them. Also, both ML and Haskell, every Lisp I have tried, and hybrids like Scala all have a means of creating genuinely mutable variables. They just are not commonly used. And again, the purely functional subsets of such languages don’t have them.
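To make that distinction concrete, here is a small sketch: a let binding names a value without ever mutating it, while Haskell’s IORef (one of the escape hatches alluded to above) gives you an honest mutable variable on the rare occasion you want one.

    import Data.IORef (newIORef, readIORef, writeIORef)

    main :: IO ()
    main = do
      -- A name, not a variable: x is bound once and never changes.
      let x = 2 + 3
      print (x * x)            -- 25

      -- An actual mutable variable, available but rarely used.
      ref <- newIORef (0 :: Int)
      writeIORef ref 42
      v <- readIORef ref
      print v                  -- 42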

You can merely “copy” from the already data or currently calculated data

You can perform calculation by reduction to normal form. The best thing to do is probably to go write programs in a functional language and see how they do in fact perform calculations. For instance, “sum [1..1000]” is not a calculation I want to perform by hand, but it is quite handily done by Haskell. We gave it a small expression that had meaning to us, and Haskell gave us back the corresponding number. So it definitely performs calculation.
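As a sketch of what “calculation by reduction” looks like, here is the same idea spelled out with a hand-rolled sum: applications of the function rewrite step by step until only a number (the normal form) is left.

    -- Evaluation proceeds by rewriting applications of mySum:
    --   mySum [1,2,3]  ->  1 + mySum [2,3]  ->  1 + (2 + mySum [3])  ->  ...  ->  6
    mySum :: [Int] -> Int
    mySum []     = 0
    mySum (x:xs) = x + mySum xs

    main :: IO ()
    main = do
      print (mySum [1 .. 1000])   -- 500500
      print (sum   [1 .. 1000])   -- the library version gives the same answer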

If data structures are supposed to remain as they are (immutable), how the hell does someone add a new item to a list?

You don’t add a new item to a list; you create a new list out of the old one. Because the old one can’t be mutated, it is perfectly safe to use it in the new list, or wherever else you want. Much more data can be safely shared in this scheme.
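A small sketch of that sharing: “adding” an element builds a new list whose tail is literally the old list, which is safe precisely because the old list can never change underneath us.

    main :: IO ()
    main = do
      let old = [2, 3, 4]      -- the original list, never modified
          new = 1 : old        -- a "new" list; its tail is old, shared, not copied
      print old                -- [2,3,4]  -- still intact
      print new                -- [1,2,3,4]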

What is the point in having a program that can’t get new data? Say you have a sensor attached to your computer that wants to feed data to the program. Would that mean that we can’t store the incoming data anywhere?

As far as user input goes, any practical programming language has a way of getting it. However, there is a purely functional subset of these languages that you write most of your code in, and that is where you reap the advantages.
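Here is a sketch of the usual split, with standard input standing in for the sensor (a real device would need its own driver or library, which isn’t shown, and the name process is made up): the loop that talks to the outside world is thin, and the interesting work stays in pure functions.

    -- Pure: decide what to do with one reading. No IO anywhere in here.
    process :: Double -> String
    process reading
      | reading > 30.0 = "warning: " ++ show reading
      | otherwise      = "ok: "      ++ show reading

    -- Impure shell: each line of stdin stands in for one sensor reading.
    main :: IO ()
    main = interact (unlines . map (process . read) . lines)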

How is functional programming good for machine learning in that case, since machine learning builds on the assumption of updating the program’s “perception” of things – and thus storing new data?

This would be the case for active learning, but most machine learning I have worked with (I work as a code monkey in a machine learning group and have done so for a few years) has a one-time learning process where all the training data is loaded in at once. For active learning, though, you can’t do things 100% purely functionally: you are going to have to read in some data from the outside world.
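For the batch case described above, the training step itself can be completely pure, e.g. a fold that produces a new model from the old model and the whole data set. In this sketch the “model” is just a running mean, purely as a placeholder:

    -- A stand-in "model": the running total and count of all observations.
    data Model = Model { total :: Double, count :: Int } deriving Show

    update :: Model -> Double -> Model
    update (Model t n) x = Model (t + x) (n + 1)   -- returns a *new* model

    train :: [Double] -> Model
    train = foldl update (Model 0 0)               -- pure: same data, same model

    main :: IO ()
    main = print (train [1.0, 2.0, 3.0, 4.0])      -- Model {total = 10.0, count = 4}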


Question Source : http://cs.stackexchange.com/questions/37558
