This has been on my mind for a while, since I had a brush with it. I’ve been using Jungle Disk’s cloud-based backup since about 2007, and I’ve been pretty satisfied with it. I had to take my laptop “into the shop” early this year, and I used another laptop while mine was being repaired. I thought I’d retrieve a few things I was working on from my cloud backup, so I tried setting up the Jungle Disk client. I was dismayed to learn that I couldn’t get access to my backed-up files, because I didn’t have my Amazon S3 Access Key. I remember going through this before and being able to recover my key from Amazon’s cloud service after giving Amazon my sign-in credentials. I couldn’t find the option for it this time. After doing some research online, I found out they stopped offering that. So, if you don’t have your access key and you want your data back, you are SOL. You can’t get any of it back, period, even if you give Amazon’s cloud service your correct sign-on credentials. I also read stories from very disappointed customers who had a computer crash, didn’t have their key, and had no means to recover their data from the backup they had been paying for, for years.

This is an issue that Jungle Disk should’ve notified customers about a while ago. It’s disconcerting that they haven’t done so, but since I know about it now, and my situation was not catastrophic, it’s not enough to make me stop using the service. The price can’t be beat, but this is also a case where you get what you pay for.

My advice: Write down your Amazon S3 Access Key–on a piece of paper, or in a digital note that you know is secure and recoverable if something bad happens to your device. Do. It. NOW! You can find it by bringing up Jungle Disk’s Activity Monitor, and then going to its Desktop Configuration screen. Look under Application Settings, and then Jungle Disk Account Information. It’s a 20-character alphanumeric code. Once you have it, you’re good to go.
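If you want to double-check that the credentials you wrote down actually work, here’s a minimal sketch using Amazon’s boto3 Python library. This is my own example, not anything from Jungle Disk: the key values are placeholders, and I’m assuming you also recovered your Secret Key (the access key is only half of the pair).

```python
# Minimal sanity check of an S3 access key/secret key pair using boto3
# (pip install boto3). The credential values below are placeholders.
import boto3
from botocore.exceptions import ClientError

ACCESS_KEY = "AKIAXXXXXXXXXXXXXXXX"  # your 20-character access key
SECRET_KEY = "your-secret-key-here"  # the matching secret key

s3 = boto3.client(
    "s3",
    aws_access_key_id=ACCESS_KEY,
    aws_secret_access_key=SECRET_KEY,
)

try:
    # list_buckets() fails with an authentication error if the pair is bad
    response = s3.list_buckets()
    print("Key works. Buckets:", [b["Name"] for b in response["Buckets"]])
except ClientError as e:
    print("Key check failed:", e)
```

If the call succeeds, the key you wrote down is the one guarding your backups.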

I’m really late with this, because the news came out in late May, when I was super busy with a trip I was planning. I totally missed the announcement until I happened upon it recently. In past posts I had made a mention or two about Disney working on a sequel to Tron: Legacy. Well, they announced that it isn’t happening. It’s been cancelled. The official announcement said that the movie release schedule in 2017 was just too full of other live-action films Disney is planning. “RaginRonin” on YouTube gave what I think is the best synopsis of movie-industry pundit analysis. It sounds like it comes down to one thing: Disney is averse to live-action films that don’t relate to two franchises that have been successful for them: Pirates of the Caribbean, and anything related to its classic fairy tales. Other than that, they want to focus on their core competency, which is animation.

All pundit analysis focused on one movie: Tomorrowland. It was a Disney live-action sci-fi movie that flopped. They figured Disney took one look at that and said, “We can’t do another one of those.”

One thing contradicts this, though, and I’m a bit surprised no one I’ve heard so far has picked up on it: Disney is coming out with a live-action sci-fi film soon. It’s called Star Wars: The Force Awakens… though it is being done through Disney’s Lucasfilm division, and it’s Lucasfilm’s staff doing it, not Disney’s. Maybe they think that will make a difference in their live-action fortunes. Disney paid a lot of money for Lucasfilm, and so of course they want it to produce. They want another series. No, more than one series!

As with the first Tron film in 1982, Legacy did well at the box office, but not well enough to wow Disney. Apparently they were expecting it to be a billion-dollar film in domestic and international ticket sales, and it grossed $400 million instead. Secondly, Tron: Uprising, the animated TV series that was produced for Disney XD and got some critical acclaim, did not do well enough for Disney’s taste, and was cancelled after one season. Though I think the criticism that “of course it didn’t do well, since they put it on an HD channel when most viewers don’t have HD” is valid, it should also be said that it wasn’t a “killer app” that drew people to HD, either. Maybe it’s more accurate to say that Tron as a property is not a hot seller for Disney, period. It’s profitable, but it doesn’t knock their socks off.

One pundit said he’s confident Disney will return to Tron sometime in the future, just as it did with Legacy, but the way things are looking now, Disney wants to focus on its profitable properties. I can buy that, but I wonder if the challenge was the story. Olivia Wilde, the actress who played “Quorra” in Legacy, mentioned this in an April interview. Shooting for the sequel, in which the original cast was slated to return, was scheduled for October, yet they didn’t have a screenplay. They had plenty of time to come up with one; Disney hired a writer for this sequel a couple of years ago.

This has happened before. As I’ve talked about in previous posts, there was an attempt to make a Tron sequel back in 2003. It was supposed to be a combined release of a video game and a movie, called Tron 2.0. The video game came out for PCs, and later, game consoles. There was a clear, dramatic storyline in the game that jumped off from the characters, and a bit of the story, from the original Tron. The whole aesthetic of the video game was very nostalgic. A lot of the focus was on the subject of computer viruses and various forms of malware, and there were some pretty interesting storylines about the main characters. I have to admit, though, that it took the focus off of what was really interesting about Tron, which was the philosophical and political arguments it made about what computing’s role should be in society.

Steven Lisberger, who was driving the effort at Disney at the time, said that an idea he had was to talk about (I’m paraphrasing), “What is this thing called the internet? What should it represent?” He said, “It’s got to be something more than a glorified phone system!” Disney had developed some concept art for the movie. It looked like it might have a chance, but it was cancelled.

Tron: Legacy, which came out in 2010, was a worthy successor to the first movie in this regard, and I think that had something to do with it getting the green light. Someone had finally come up with something profound to say about computing’s role in society (I talk about this here). I think there’s more to this story than the market for live-action sci-fi movies from Disney. I think they haven’t found something for a sequel to communicate, and they were running up against their production deadline. I suspect that Lisberger and Kosinski did not want to rush out something that was unworthy of the title. Canceling it will give more time for someone down the road to do it right.

Several years ago, while I was taking in as much of Alan Kay’s philosophy as I could, I remember him saying that he wanted to see science integrated into education. He felt it necessary to clarify this: he didn’t mean teaching everything the way people think of science–as experimental proofs of what’s true–but rather science in the sense of its root word, scientia, meaning “to know.” In other words, make the practices of science the central operating principle of how students of all ages learn, with the objective of learning how to know, and situating knowledge within models of epistemology (how we know what we know). Back when I heard this, I didn’t have a good sense of what he meant, but I think I have a better sense now.

Kay has characterized the concept of knowledge that is taught in education, as we typically know it, as “memory.” Students are expected to take in facts and concepts which are delivered to them, and then their retention is tested. This is carried out in history and science curricula. In arithmetic, they are taught to remember methods of applying operations to compute numbers. In mathematics they are taught to memorize or understand rules for symbol manipulation, and then are asked to apply the rules properly. In rare instances, they’re tasked with understanding concepts, not just applying rules.

Edit 9/16/2015: I updated the paragraph below to flesh out some ideas, so as to minimize misunderstanding.

What I realized recently is that what’s missing from this construction of education are the ideas of being skeptical and/or critical of one’s own knowledge, and of venturing into the unknown and trying to make something known out of it, based on analysis of evidence, with the goal of achieving greater accuracy to what’s really there. Second, it also misses out on creating a practice of improving on notions of what is known, through practices of investigation and inquiry. These are qualities of science, but they’re not only applicable to what we think of as the sciences; they also apply to what we think of as non-scientific subjects: history, mathematics, and the arts, to name just a few. Instead, the focus is on transmitting what’s deemed to be known. There is scant practice in venturing into the unknown, or in improving on what’s known. After all, who made what is known, as far as a curriculum is concerned, but other people, who may or may not have done the job of analyzing it very well?

This isn’t to say that students shouldn’t be exposed to notions of what is known, but I think they ought to also be taught to question it, be given the means and opportunity to experience what it’s like to try to improve on its accuracy, and realize its significance to other questions and issues. Furthermore, that effort on the part of the student must be open to scrutiny and rational, principled criticism by teachers and fellow students. I think it’d even be good to have professionals in the field brought into the process to do the same, once students reach some maturity. Knowledge comes not just through the effort to improve, but through arguments pro and con on that effort.

A second ingredient Kay has talked about in recent years is the need for outlooks. He said in a presentation at Kyoto University in 2009:

What outlook does is give you a stronger way of looking at things, by changing your point of view. And that point of view informs every part of you. It tells you what kind of knowledge to get. And it also makes you appear to be much smarter.

Knowledge is ‘silver,’ but outlook is ‘gold.’ I dare say [most] universities and most graduate schools attempt to teach knowledge rather than outlook. And yet we live in a world that has been changing out from under us. And it’s outlook that we need to deal with that.

He has called outlooks “brainlets,” which have been developed over time for getting around our misperceptions, so we can see more clearly. One such outlook is science. A couple of others are logic and mathematics. And there are more.

The education system we have has some generality to it, but as a society we have put it to a very utilitarian task, and as I think is accurately reflected in the quote from Kay, we rob ourselves of the ability to gain important insights into our work, our worth, and our world by doing this. The sense I get from this perspective is that as a society, we use education better when we use it to develop how to think and perceive, not to develop utilitarian skills that apply in an A-to-B fashion to some future employment. This isn’t to say that the skills used and needed in real-world jobs are unimportant. Quite the contrary, but really, academic school is no substitute for learning job skills on the job. Schools try in some ways to substitute for it, but I have not seen one that has succeeded.

What I’ll call “skills of mind” are different from “skills of work.” Both are important, and I have little doubt that the former can be useful to some employers, but the point is that they’re useful to people as members of society, because outlooks can help people understand the industry they work in, and the economy, society, and world they live in, better than they can without them. I know, because I have experienced the contrast in perception between those who use powerful outlooks to understand societal issues, and those who don’t, who fumble into mishaps, never understanding why, always blaming outside forces. What pains me is that I know we are capable of great things, but in order to achieve them, we cannot just apply what seems like common sense to every issue we face. That results in sound and fury, signifying nothing. To achieve great things, we must be able to see better than the skills we were born and raised with can afford us.

Which computer icons no longer make sense to a modern user?

For example, the use of the now-obsolete floppy disk to represent “save”?

This question made me conscious of the fact that the icons computer, smartphone, and many web interfaces use are a metaphor for the way in which the industry has designed computers for a consumer market. That is, they are to be used to digitize and simulate old media.

The way this question is asked is interesting and encouraging: these icons no longer make sense to modern users. Another interesting question is: what should replace them? However, without powerful outlooks, I suspect it’s going to be difficult to come up with anything that really captures the power of this medium that is computing, and we’ll just use the default of ourselves as metaphors.

In my time on Quora.com, I’ve answered a bunch of questions on object-oriented programming (OOP). In giving my answers, I’ve tried my best to hew as closely as I can to what I’ve understood of Alan Kay’s older notion of OOP from Smalltalk. The theme of most of them is, “What you think is OOP is not OOP.” I did this partly because it’s what’s most familiar to me, and partly because writing about it helped me think more clearly about this older notion. I hoped that other Quora readers would be jostled out of their complacency about OO architecture, and it seems I succeeded with a few of them. I’ve had the thought recently that I should share some of my answers on this topic with readers here, since they are more specific descriptions than what I’ve shared previously. I’ve linked to these answers below (they are titled as the questions to which I responded).

What is Alan Kay’s definition of object-oriented? (Quora member Andrea Ferro had a good answer to this as well.)

A big thing I realized while writing this answer is that Kay’s notion of OOP doesn’t really have to do with a specific programming language, or a programming “paradigm.” As I said in my answer, it’s a method of system organization. One can use an object-oriented methodology in setting up a server farm. It’s just that Kay has used the same idea in isolating and localizing code, and setting up a system of communication within an operating system.
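To make this concrete, here’s a toy sketch (my own, not Kay’s code) of message passing as a method of system organization: the sender only knows an endpoint it can send messages to, and all dispatch decisions happen inside the receiving object.

```python
# A toy model of Kay-style message passing: objects act as little
# "servers" that interpret messages; senders never call their methods
# directly.

class Account:
    def __init__(self):
        self._balance = 0

    def receive(self, message, **args):
        # The receiver decides what a message means. Unknown messages
        # can be handled gracefully instead of crashing the sender.
        if message == "deposit":
            self._balance += args["amount"]
        elif message == "balance?":
            return self._balance
        else:
            return f"not understood: {message}"

def send(obj, message, **args):
    """All communication flows through one narrow channel."""
    return obj.receive(message, **args)

acct = Account()
send(acct, "deposit", amount=100)
print(send(acct, "balance?"))  # -> 100
print(send(acct, "close"))     # -> not understood: close
```

The point isn’t the class; it’s that the only coupling between sender and receiver is the message vocabulary, which is why the same idea works for a server farm as well as for code within an operating system.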

Another idea that had been creeping into my brain as I answered questions on OOP is that his notion of interfaces was really an abstraction. Interfaces were the true types in his message passing notion of OOP, and interfaces can and should span classes. So, types were supposed to span classes as well! The image that came to mind is that interfaces can be thought to sit between communicating objects. Messages, in a sense, pass through them, to dispatch logic in the receiving object, which then determines what actual functionality is executed as a result. Even the concept of behavior is an abstraction, a step removed from classes, because the whole idea of interfaces is that you can change the class that carries out a behavior with a completely new implementation (supporting the same interface), and the expected behavior for it will be exactly the same. In one of my answers I said that objects in OOP “emulate” behavior. That word choice was deliberate.

This finally made sense of something that had eluded me before: why Kay said that OOP is about what goes on between objects, not what goes on with the objects themselves (which are just endpoints for messages). The abstraction is interstitial: it’s in the messaging (which is passed between objects), and in the interfaces.

This is a conceptual description. The way interfaces were implemented in Smalltalk was as collections of methods in classes. They were strictly a “gentleman’s agreement,” both in terms of the messages to which they matched and their behavior. They did not have any sort of type identifiers, except for programmers recognizing a collection of method headers (and their concomitant behaviors) as an interface.
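Here’s a small sketch of what that “gentleman’s agreement” looks like. This is Python rather than Smalltalk, and my own illustration: two unrelated classes answer the same messages, so either can stand behind the interface without any declared type.

```python
# An "interface" as an informal collection of messages: nothing declares
# it, and the caller is written only against the messages themselves.

class FileLog:
    def write(self, text):
        print(f"(file) {text}")   # stand-in for real file I/O

class NetworkLog:
    def write(self, text):
        print(f"(net) {text}")    # stand-in for a network send

def audit(sink, events):
    # 'sink' is "typed" only by the messages it answers to; swapping in
    # a new implementation leaves the expected behavior unchanged.
    for e in events:
        sink.write(e)

audit(FileLog(), ["login", "logout"])
audit(NetworkLog(), ["login", "logout"])  # same interface, new class
```

Nothing in `audit` names either class, which is the sense in which the type (the interface) spans classes and sits between the communicating parties.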

Are utility classes good OO design?

Object-Oriented Programming: In layman’s terms, what is the difference between abstraction, encapsulation and information hiding?

What is the philosophical genesis of object-oriented programming methodology?

Why are C# and Java not considered as Object Oriented Language compared to the original concept by Alan Kay?

The above answer also applies to C++.

What does Darwin’s theory of evolution bring to object oriented programming methodology?

This last answer gets to what I think are some really interesting thoughts that one can have about OOP. I posted a video in my answer from a presentation by philosopher Daniel Dennett on “Free Will, Determinism and Evolution.” Dennett takes an interesting approach to philosophy. He seems almost like a scientist, but not quite. In this presentation, he uses Conway’s Game of Life to illustrate a point about free will in complex, very, very, very large scale systems. He proposes a theory that determinism is necessary for free will (not contrary to it), and that as systems survive, evolve, and grow, free will becomes more and more possible, whereby multiple systems that are given the same inputs will make different decisions (once they reach a structured complexity that makes it possible for them to make what could be called “decisions”). I got the distinct impression after watching this that this idea has implications for where object-orientation can go in the direction of artificial intelligence. It’s hard for me to say what this would require of objects, though.
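For readers who haven’t seen it, here’s a tiny sketch of the Game of Life rules Dennett uses. This is my own code, not anything from the presentation; it just makes the determinism concrete: the same seed always produces the same history.

```python
# Conway's Game of Life in a few lines: a fully deterministic system.
from collections import Counter

def step(live):
    """Advance one generation. 'live' is a set of (x, y) cell coordinates."""
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next step if it has 3 neighbors, or 2 and was alive.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

# A "glider" seed: run two copies and their histories never diverge.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
a = b = glider
for _ in range(8):
    a, b = step(a), step(b)
print(a == b)  # True: same inputs, same outputs, every time
```

Dennett’s argument, as I understand it, is that free will in any interesting sense has to be built on top of exactly this kind of lawful substrate.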

—Mark Miller, https://tekkie.wordpress.com

Peter Foster, a columnist for the National Post in Canada, wrote what I think is a very insightful piece on a question that’s bedeviled me for many years, in “Why Climate Change is a Moral Crusade in Search of a Scientific Theory.” I have never seen a piece of such quality published on Breitbart.com. My compliments to them. It has its problems, but there is some excellent stuff to chew on, if you can ignore the gristle. The only ways I think I would have tried to improve on what he wrote are, one, to go deeper into identifying the philosophies that are used to “justify the elephantine motivations.” As it is, Foster uses readily identifiable political labels–“liberal,” “left,” etc.–as identifiers. This will please some, and piss off others. I really admired Allan Bloom’s efforts to get beyond these labels: to understand and identify the philosophies, the mistakes he perceived were made in “translating” them into an American belief system (which were at the root of his complaints), the philosophers who came up with them, and the consequences that have been realized through their adherents. There’s much to explore there.

Two, he “trips” over an analogy that doesn’t really apply to his argument (though he thinks it does): his reference to supposed motivations for thinking Earth was the center of the Universe. Aspects of the stubbornness of pre-Copernican thinking, to which he also refers, do apply.

Foster says a lot in this piece, and it’s difficult to fully grasp it without taking some time to think about what he’s saying. I will try to make your thinking a little easier.

He begins with trying to understand the reasons for, and the motivational consequences of, economic illiteracy in our society. He uses a notion from evolutionary psychology (perhaps from David Henderson, to whom he refers): that our brains have been shaped in part by hundreds of thousands of years (perhaps millions; who knows how far back it goes) of tribal society, so that our natural perceptions of human relations, regarding power and wealth, and what is owed as a consequence of social status, are influenced by our evolutionary past.

Here is a brief video from Reason TV on the field of evolutionary psychology, just to get some background.

Foster says that our modern political and economic system, which frustrates tribalism, has been only a brief blink of an eye in that evolutionary experience, by comparison. So we still carry that evolutionary heritage, and it emerges naturally in our perceptions of our modern systems. He says this argument is controversial, but he justifies using it by saying that there is an apparent pattern to the consequences of economic illiteracy. He notices a certain consistency in the arguments that are used to morally challenge our modern systems, which does not seem to depend on how skilled people are in their niche areas of knowledge, or how unskilled they are in many others. It’s important to keep what he says about this in mind throughout the article, even though he goes on at length into other areas of research, because it ties in in an important way, though he does not refer back to it.

He doesn’t say this, but I will. At this point, I think that the only counter to the natural tendencies we have (regardless of whether Foster describes them accurately or not) is an education in the full panoply of the outlooks that formed our modern society, and an understanding of how they have advanced since their formation. In our recent economy, there’s a tendency to think about narrowing the scope of education toward specialties, since each field is so complex and takes a long time to master, but that will not do. If people don’t get the opportunity to explore, or at least experience, these powerful ways of understanding the world, then the natural tendency toward a “low-pass filter” will dominate what one sees, and our society will have difficulty advancing, and may regress. A key phrase Foster uses is, “Believing is seeing.” We think we see with our eyes, but we actually see with our beliefs (or, in scientific terms, our mental models). So it is crucial to be conscious of our beliefs, and to be capable of examining and questioning them. Philosophy plays a part in “exercising” and developing this ability, but I think this really gets at the motivation to understand the scientific outlook, because this is truly what it’s about.

A second significant area Foster explores is a notion of moral psychology from Jonathan Haidt, who talks about “subconscious elephants,” which are “driving us,” unseen. We justify our actions using moral language, giving the illusion that we understand our motivations, but we really don’t. Our pronouncements are more like PR statements, convincing others that our motivations are good, and should be allowed to move forward, unrestricted. However, without examining and understanding our motivations, and their consequences, we can’t really know whether they are good or not. Understanding this, we should be cautious about giving anyone too much power–power to use money, and power to use force–especially when they appeal to our moral sensibility to give it to them.

Central to Foster’s argument is that “climate change” is a moral crusade, a moral argument–not a scientific one–that uses the authority our society gives to science to push aside skepticism and caution regarding the actions that are taken in its name, and the technical premises that motivate them. Foster excuses the people who promote the hypothesis of anthropogenic global warming (AGW) as fact, saying they are not frauds who are consciously deceiving the public. They are riding pernicious, very large “elephants,” and they are not conscious of what is driving them. They are convinced of their own moral rightness, and they are honest, at least, in that belief. That should not, however, excuse their demands for more and more power.

I do not mean what I say here to be a summary of Foster’s article. I encourage you to read it. I only mean to make the complexity of what he said a bit more understandable.

Related posts: Foster’s argument has made me re-examine a bit what I said in the section on Carl Sagan in “The dangerous brew of politics, religion, technology, and the good name of science.”

I’m taking a flying leap with this, but I have a suspicion my post called “What a waste it is to lose one’s mind,” exploring what Ayn Rand said in her novel, “Atlas Shrugged,” perhaps gets into describing Haidt’s “motivational elephants.”

Various computer historians, including Charles Babbage’s son, built pieces of Babbage’s Analytical Engine, but none built the whole thing. Doron Swade and colleagues are planning on finally building the whole thing for the London Science Museum! They have a blog for their project, called Plan 28, where you can read about their progress.

Here is John Graham-Cumming in a TEDx presentation from 2012, talking about some of the history of the Analytical Engine project that Babbage and Ada Lovelace engaged in, in the mid-1800s, and the announcement that Graham-Cumming and others are going to build it. Very exciting! I got the opportunity to see the second replica that Swade and his team built of Babbage’s Difference Engine #2 at the Computer History Museum in Mountain View, CA, back in 2009. I’d love to see this thing go!

Related post: Realizing Babbage

I’ve been spending quite a bit of time on Quora lately. This is one of the better questions I’ve seen on there in the last 7 months.

Bugs represent defects in a program, though in some cases they have unintentional consequences: users might have a use case for such flaws that the developer originally didn’t think of.

Some bugs graduate from being defects to becoming official, fully supported features.

What are the best examples of this in software history?

What are the best examples of software bugs that became features (a.k.a. misbugs)?

Back in 2009 I wrote a post called “Getting an education in America.” I went through a litany of facts which indicated we as a society were missing the point of education, and wasting a lot of time and money on useless activity. I made reference to a segment with John Stossel, back when he was still a host on ABC’s 20/20, talking about the obsession we have that “everyone must go to college.” One of the people Stossel interviewed was Marty Nemko, who made a few points:

  • The bachelor’s degree is the most overvalued product in America today.
  • The idea marketed by universities that you will earn a million dollars more over a lifetime with a bachelor’s than with a high school diploma is grossly misleading.
  • The “million dollar” figure is based on stats of ambitious high achievers who just so happened to have gone to college, but who didn’t require the education to be successful. Their success is misattributed to their education, and it’s unethical that universities continue to use the figure in their sales pitch, saying, “It doesn’t matter what you major in.”

It turns out Nemko has his own channel on YouTube. I happened to find a video of his from 2011 that really fleshes out the points he made in the 20/20 segment. What he says sounds very practical to me, and I encourage high school students, and their parents, to watch it before deciding to go to college.

Nemko talks about what college really is these days: a business. He talks about how the idea that “everyone must go to college” has created a self-defeating proposition: now that so many more people, in proportion to the general population, are getting bachelor’s degrees, getting one is not a distinction anymore. It doesn’t set you apart as someone who is uniquely skilled. He advises that if you now want the distinction that used to come from a bachelor’s, you should get a master’s degree. He talks about the economics of universities, and where undergraduates fit into their cost structure. This is valuable information to know, since students are going to have to deal with these realities if they go to college.

It’s not an issue of rejecting college, but of assessing whether it’s really worth it for you. He also outlines some other possibilities that could serve you a lot better, if what motivates you is not well suited to a 4-year program.

Nemko lays out his credentials. He’s gotten a few university degrees himself, and he’s worked at high levels within universities. He’s not just some gadfly who badmouths them. I think he knows of what he speaks. Take a listen.

The University of Colorado at Boulder started up a visiting scholar program for conservative thought last year. I had my doubts about it. I don’t like the idea of “affirmative action for certain ideologies.” One would think that if a university’s mission were to educate, it wouldn’t care what one’s political leanings were. That’s a private matter. I would think a university, fully cognizant of its role in society, would look for people who are not only highly qualified and show a dedication to academic work, but who also offer a philosophical balance, not an ideological one.

However, it has been noted many times how politically slanted university faculties are, at least in their political party registration. Looking at the stats, one would think the institutions have in fact become bastions for one political party or another, and listening to the accounts from some scholars and students, you’d think that the arts & humanities colleges have become training grounds for political agitators and propagandists. I don’t find that encouraging. The fact that for many years universities have not used this apparent tilt toward ideological purity as an opportunity for introspection about what they are actually teaching, but seem rather to take it as a mark of pride, is also troubling. All of the excuses I’ve heard over the years sound like prejudices against classical thought. I’d like to ask them, “Can you come up with anything qualitatively better?” (if they’ve even thought about that), but I’m afraid I will be disappointed by the answer while they high-five each other.

Having actually witnessed a bit of the conservative thought program at CU (seeing a couple of the guest speakers), I’m pleased with it. It has an academically conservative slant, and, from what I’ve seen, avoids the “sales pitch” for itself. Instead, it argues from a philosophical perspective that is identified as conservative by society. The most refreshing thing is it’s open to dialogue.

The first professor in the program, Dr. Steven Hayward, wrote a couple of excellent speeches on political discourse, which I read.

I thought I would highlight the profile that was written for the next professor in the program, Dr. Bradley Birzer. He appears to be a man after my own heart on these matters. I’m looking forward to what he will present.

How would you characterize the state of political discourse in the United States today?

Terrible. Absolutely terrible. But, I must admit, I write this as a 46-year-old jaded romantic who once would have given much of his life to one of the two major political parties.

Political discourse as of 2014 comes down to two things: 1) loudness and 2) meaningless nothings. Oration is a dead art, and the news from CNN, Fox and other outlets is just superficial talking points with some anger and show. Radio is just as bad, if not worse. As one noted journalist, Virginia Postrel, has argued, we probably shouldn’t take anything that someone such as Ann Coulter says with any real concern, as she is “a performance artist/comedian, not a serious commentator.”

Two examples, I think, help illustrate this. Look at any speech delivered by almost any prominent American from 1774 to 1870 or so. The speeches are rhetorically complicated, the vocabulary immense, and the expectations of a well-informed audience high. To compare the speech of an 1830s member of Congress with one—perhaps even the best—in 2014 is simply gut-wrenchingly embarrassing.

Another example. The authors of the Constitution expected us to discuss the most serious matters with the utmost gravity. Nothing should possess more gravitas in a republic than the issue of war. Yet, as Americans, we have not engaged in a properly constitutional debate on the meaning of war since the close of World War II. We’ve seen massive protests, some fine songs, and a lot of bumper stickers, but no meaningful dialogue.

As a humanist, I crave answers for this, and I desire a return to true—not ideological—debate and conversation. Academia has much to offer the larger political world in this.

How do you view the value of higher education today, particularly given its rising cost and rising student-loan burden?

I’m rather a devoted patriot of and for liberal education. From Socrates forward, the goal of a liberal education has been to “liberate” the human person from the everyday details of this world and the tyranny of the moment. Our citizenship, as liberally educated persons, belongs to the eternal Cosmopolis, not to D.C. or London or. . . .

But, in our own titillation with what we can create, we often forget what came before and what will need to be passed on in terms of ethics and wisdom. The best lawyer, the best engineer, the best chemist, will be a better person for knowing the great ideas of the past: the ethics of Socrates; the sacrifice of Perpetua; and the genius of Augustine.

