Archive for May, 2016

10 years

I started this blog on May 31, 2006. It’s been 10 years.

When I started it, I expected to be leaving technical advice for programmers and IT people, since I had been doing that in comment forums for a while. The advice I was leaving was becoming repetitive, and rather involved; the same problems kept coming up for people. I saw a blog as a way of optimizing the process: I’d write the advice here once, and refer people to it. Very quickly, though, a new interest emerged for me, which grew into re-evaluating and exploring computer science, and that’s mostly what I’ve been writing about since.

The purpose I’ve held for this blog is to share what I’ve learned as I’ve tried to understand, in much more depth, the perspective on computing that drew me to the field when I was young. The reason I wanted to share it is that I understood quickly that if this was going to be something I pursued down the road, it had to be more than just me who saw value in it. I wanted to bring other people along. I thought of it as a “trail of breadcrumbs” for people like myself. I don’t know if it’s had that effect. It seems more like certain subjects have drawn a large audience, but the interest stays focused there, and on a few related posts, not on the blog as a whole. That’s alright. At least it’s having some impact, as I see when people leave appreciative comments, and occasionally e-mail me to extend the conversation. This is just an impression, but it appears I’m on a path most don’t want to follow, and that may have to do with the fact that most don’t have the luxury. Nevertheless, I will continue posting about what I learn. It helps my learning process to write about it, and it helps my writing to do it with the knowledge that others will see it.

Related posts:

My journey, Part 6

The 150th post

I’ve had it on my mind to post this for a while, though I do it with hesitation, since this has been so politicized. I wrote about the ban on incandescents in 2008, in “Say goodbye to incandescent bulbs. Say hello to the ‘new normal’.”

I found an interesting nugget as I was writing this post. Back then, I had an idea for how to make CFLs safer. I don’t know why no one thought of it before I did. From a consumer safety standpoint, it would seem like an obvious solution:

[M]aybe they could put a plastic bulb around the coil that would absorb the shock, or at least contain the contents if the coil broke from jarring. This would require them to make the coils smaller, but the added safety would be worth it.

Several months ago, while shopping for bulbs at the grocery store, I noticed they were selling CFLs with a hard plastic bulb, just like I suggested! Too little, too late, I guess. I am happy to say that CFLs are going away at the end of this year.

I actually stocked up on incandescents on clearance. I haven’t been using CFLs for most of my lighting. I ended up using them for a couple overhead lights, just because an inspector came through a couple years ago to make sure the apartment complex I live in was “smart regs-compliant,” and I didn’t want to embarrass my landlord by creating any excuse to call the dwelling non-compliant. “I love my minders…”

This doesn’t mean incandescents are coming back. They’re still banned by legislation passed in 2007, but at least there are now better lighting options. For a few years now, those who prefer incandescents have been able to get halogen bulbs that are the same size as the old incandescents, and just as bright. They cost more, but they’re supposed to use less electricity and last longer. What I care about is that they operate on the same principle: they use a filament. So if you break one, all you have to do is clean up the glass, and vacuum or mop. No toxic spill. Yay!

As the guys in the video say, there are now very good LED lighting options, which are also non-toxic to you.

So I see this as reason to celebrate. The market actually succeeded in getting rid of an expensive, hazardous form of lighting that, it seems to me, our government tried to force on us. I wish the ban would be lifted. I don’t see the point in it. I know people will say it reduces pollution, because we can lower the amount of electricity we use, but as I explained years ago, this is a dumb way to go about it. Lighting is not the major source of electricity usage. Our air conditioning is the elephant in the room each summer!

Aaron Renn had a good article on what was really going on with the light bulb ban, in “The Return of the Monkish Virtues,” written in 2012. It’s interesting reading. The idea always was to force people to think about their impact on the environment, not to actually do anything substantial about it.

“We need to do away with the myth that computer science is about computers. Computer science is no more about computers than astronomy is about telescopes, biology is about microscopes or chemistry is about beakers and test tubes. Science is not about tools, it is about how we use them and what we find out when we do.”
— “SIGACT trying to get children excited about CS,”
by Michael R. Fellows and Ian Parberry

There have been many times over the years when I’ve thought about writing about this, but I’ve felt like I wasn’t sure what I was doing yet, or what I was learning. So I’ve held off. I feel like I’ve reached a point now where some things have become clear, and I can finally talk about them coherently.

I’d like to explain some things that I doubt most CS students are going to learn in school, but they should, drawing from my own experience.

First, some notions about science.

Science is not just a body of knowledge (though that is an important part of it). It is also venturing into the unknown, and developing within yourself an idea of what it means to really know something, and an understanding that knowing something doesn’t mean you know the truth. It means you have a pretty good idea of what the reality you are studying is like. You are able to create a description that resembles that reality closely enough that you can make predictions about it, and those predictions can be confirmed within boundaries that are either known or discoverable, and subsequently confirmed by others without the need to revise the model. Even then, what you have is not “the truth.” What you have might be a pretty good idea for the time being. As Richard Feynman said, in science you never really know if you are right; all you can be certain of is when you are wrong. Being wrong can be very valuable, because you can learn the limits of your common-sense perceptions, and knowing that, direct your efforts toward what is more accurate, what is closer to reality.

The field of computing and computer science doesn’t have this perspective. It is not science. My CS professors used to admit this openly, but my understanding is that CS professors these days aren’t even aware of the distinction anymore. In other words, they don’t know what science is, but they presume that what they practice is science. CS, as it’s taught, has some value, but what I hope to introduce people to here is the notion that after they have graduated with a CS degree, they are in kindergarten, or perhaps first grade. They have learned some basics about how to read and write, but they have not learned what is really powerful about that skill. I will share some knowledge I have gleaned as someone who has ventured into this perspective as a beginner.

The first thing to understand is that an undergraduate computer science education doesn’t tend to give you any notions about what computing is. It doesn’t explore models of computing, and in many cases it doesn’t even present much in the way of different models of programming. It presents one model of computing (the von Neumann model), but it doesn’t venture deeply into how that model works. It provides some descriptions of the entities that are supposed to make it work, but it doesn’t tie them together so that you can see the relationships, and see how the thing really works.

Science is about challenging and forming models of phenomena. A more powerful notion of computing is realized by understanding that the point of computer science is not the application of computing to problems, but creating, criticizing, and studying the strengths and weaknesses of different models of computing systems, with programming as a means of modeling our ideas about them. In other words, programming languages and development environments can be used to model, explore, test, and criticize our notions about different computing system models. It behooves the computer scientist either to choose an existing computing architecture, represented by an existing language, or to create a new architecture and a new language suitable to what is being attempted, so that the focus can stay on what is being modeled, and on testing that model, without a lot of distraction.

As Alan Kay has said, the “language” part of “programming language” is really a user interface. I suggest that it is a user interface to a model of computing, a computing architecture. We can call it a programming model. As such, it presents a simulated environment of what is actually being accomplished in real terms, which enables a certain way of seeing the model you are trying to construct. (As I said earlier, we can use programming (with a language) to model our notions of computing.) In other words, you can get out of dealing with the guts of a certain computing architecture, because the runtime is handling those details for you. You’re just dealing with the interface to it. In this case, you’re using, quite explicitly, the architecture embodied in the runtime, not an API, as a means to an end. The power doesn’t lie in an API. The power you’re seeking to exploit is the architecture represented by the language runtime (a computing model).

This may sound like a confusing roundabout, but what I’m saying is that you can use a model of computing as a means for defining a new model of computing, whatever best suits the task. I propose that, indeed, that’s what we should be doing in computer science, if we’re not working directly with hardware.
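
To make that less abstract, here is a minimal sketch of the idea in Scheme, the Lisp dialect I’ll use for illustration (the names here are my own, not anything standard): a tiny evaluator that uses the host language, one model of computing, to define another, a small lambda-calculus-style language with numbers, variables, one-argument lambdas, and application.

;; A minimal sketch: using one computing model (the host Lisp)
;; to define another (a tiny expression language).

(define (lookup var env)
  (cond ((null? env) (error "unbound variable" var))
        ((eq? var (caar env)) (cdar env))
        (else (lookup var (cdr env)))))

(define (evaluate exp env)
  (cond ((number? exp) exp)                      ; numbers are self-evaluating
        ((symbol? exp) (lookup exp env))         ; variable reference
        ((eq? (car exp) 'lambda)                 ; (lambda (x) body) => closure
         (list 'closure (caadr exp) (caddr exp) env))
        (else                                    ; application: (f arg)
         (apply-closure (evaluate (car exp) env)
                        (evaluate (cadr exp) env)))))

(define (apply-closure clo arg)
  (let ((param (cadr clo)) (body (caddr clo)) (env (cadddr clo)))
    (evaluate body (cons (cons param arg) env))))

(display (evaluate '((lambda (x) x) 42) '()))    ; prints 42

The point isn’t this toy itself; it’s that the host runtime handles the guts for you, so your attention stays on the semantics of the model you’re defining.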

When I talk about modeling computing systems, given their complexity, I’ve come to understand that constructing them is a gradual process of building up more succinct expressions of meaning for complex processes, while keeping the sub-processes that support those higher levels of meaning flexible in how they can be used. As a beginner, I’ve found that starting with a powerful programming language in a minimalist configuration, with just basic primitives, was very constructive for understanding this, because it forced me to venture into constructing my own means for expressing what I want to model, and to understand what is involved in doing that. I’ve been using a version of Lisp for this purpose, and it’s been a very rewarding experience, but other students may have other preferences. I don’t mean to prejudice anyone toward one programming model. I recommend finding, or creating, one that fits the kind of modeling you are pursuing.
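
For instance (a sketch of my own, not a prescription): starting from nothing but the pair primitives cons, car, and cdr, you can build a small vocabulary for lookup tables, and each layer of definitions becomes a more succinct way of saying something that the layer below spells out.

;; Building up more succinct expressions of meaning from bare primitives.
;; Layer 1: name the raw pair operations in terms of what they mean here.
(define (make-entry key value) (cons key value))
(define (entry-key entry) (car entry))
(define (entry-value entry) (cdr entry))

;; Layer 2: express a lookup table in terms of entries, not pairs.
(define (table-lookup key table)
  (cond ((null? table) #f)
        ((equal? key (entry-key (car table))) (entry-value (car table)))
        (else (table-lookup key (cdr table)))))

;; Callers of table-lookup never touch cons/car/cdr directly, so the
;; underlying representation stays flexible: it could change without
;; disturbing the meaning expressed at this level.
(define colors
  (list (make-entry 'sky 'blue)
        (make-entry 'grass 'green)))

(display (table-lookup 'grass colors))   ; prints green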

Along with this perspective, I think, there must come an understanding of what different programming languages are really doing for you, in their fundamental architecture. What are the fundamental types in the language, and for what are they best suited? What are the constructs and facilities in the language, and what do they facilitate, in real terms? Sure, you can use them in all sorts of different ways, but to what do they lend themselves most easily? What are the fundamental functions in the runtime architecture that are experienced in the use of different features in the language? And finally, how are these things useful to you in creating the model you want to attempt? Seeing programming languages in this way provides a whole new perspective on them that has you evaluating how they facilitate your modeling practice.
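
As one concrete instance of this kind of question (again, a sketch of my own): in a Lisp, the lexical closure is one of those fundamental constructs, and asking what it lends itself to reveals that it models private, encapsulated state directly, with no dedicated object machinery.

;; What does a lexical closure "lend itself to"? It captures its defining
;; environment, so it models private state directly.
(define (make-counter)
  (let ((count 0))
    (lambda ()
      (set! count (+ count 1))
      count)))

(define tick (make-counter))
(display (tick))   ; prints 1
(display (tick))   ; prints 2; count persists between calls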

Application comes along eventually, but it comes late in the process. The real point is to create the supporting architecture, and any necessary operating systems, first. That’s the “hypothesis” in the science you are pursuing. Applications are tests of that hypothesis, to see whether what its creators predicted actually bears out. The point is to develop hypotheses about computing systems so that their predictive limits can be ascertained, if they are not falsified. People will ask, “Where does practical application come in?” For the computer scientist involved in this process, it doesn’t, really. Engineers can plumb the knowledge base that’s generated through this process to develop ideas for better system software, software development systems, and application environments to deploy. However, computer science should not be in service to that goal. It can merely be useful toward it, if engineers see that it is fit for it. The point is to explore, test, discuss, criticize, and strive to know.

These are just some preliminary thoughts on computer systems modeling. In my research, I’ve gotten indications that it’s not healthy for the discipline to be insular, just looking at its own artifacts for inspiration for new ideas. It needs to look at other fields of study to introduce unorthodox models into the discussion. I haven’t gotten into that yet. I’m still trying to understand some basic ideas of a computer system, using a more mathematical approach.

Related post: Does computer science have a future?

— Mark Miller, https://tekkie.wordpress.com
