Tuesday, July 23, 2024

A Catholic Examination of Hindu Teachings

Interfaith dialogues between Hinduism and Catholicism offer valuable opportunities for understanding and appreciating the richness of both traditions. Hinduism, with its extensive array of sacred texts and philosophical concepts, provides a deep exploration of divinity, karma, and the cycle of life. Catholicism, grounded in the teachings of the Bible and the Catechism of the Catholic Church, emphasizes the unique and irreplaceable role of Jesus Christ in the salvation of humanity. While these traditions differ in many aspects, engaging with each tradition respectfully allows for a nuanced appreciation of their respective teachings and practices, while recognizing the fullness of divine truth in Catholicism.

Inclusivity and Exclusivity

Hinduism is diverse, but ISKCON, one of its most visible branches in America, is monotheistic, acknowledging one supreme being (Kṛṣṇa) who is worshiped with exclusive devotion (Bhagavad Gita 9:22). Other gods are portrayed as manifestations of Kṛṣṇa (Bhagavad Gita 9:23). Catholicism, on the other hand, teaches that the fullness of divine revelation and the path to salvation is found in Jesus Christ alone (CCC 846-848) and holds that other gods are false gods or demonic entities (Psalm 96:5; 1 Corinthians 10:20; CCC 2112). For true salvation, one must accept that the name of Jesus Christ of Nazareth is above every other name.

While the Catechism of the Catholic Church teaches that other religions may contain elements of truth and goodness, it also teaches that the fullness of truth resides in the Catholic faith (CCC 819, 839). Prominent theologians such as St. Augustine and St. Thomas Aquinas have affirmed the exclusivity of Christ’s redemptive role and argued against other religious expressions. The Catholic Church’s position reflects a commitment to sharing the message of Christ’s salvation while recognizing the cultural entrenchment of Hinduism.

Reincarnation and Karma

Hinduism’s concepts of reincarnation and karma offer a complex understanding of the soul's journey through multiple bodies (Bhagavad Gita 2:13). In Hindu belief, all souls are born from and return to the Supersoul, or Brahman (Bhagavad Gita 13:13). Those who understand Kṛṣṇa do not become entangled with karma, or material activities (Bhagavad Gita 4:14), and go to Kṛṣṇa's supreme abode, never to return to Earth (Bhagavad Gita 8:21), while those who do not understand remain trapped in the cycle of birth and death (Bhagavad Gita 2:51). However, Catholicism firmly rejects reincarnation (CCC 1013). At the moment of death, each person is definitively judged and obtains either entrance into heaven or damnation (CCC 1022).

The distinction between Hinduism and Catholicism lies in the purification process of the soul. Catholic teachings on the lack of reincarnation reflect a belief in the finality of judgment and eternal communion with God, while Hindu beliefs offer little guidance on extricating oneself from the interconnectedness of actions and their consequences. While Hinduism provides a complex narrative, the Catholic perspective offers a clear and definitive understanding of the afterlife based on a personal and unique judgment by God.

Grace and Salvation

Hinduism acknowledges one path to spiritual fulfillment through surrender to Kṛṣṇa (Bhagavad Gita 15:4, 18:66). Catholicism teaches that salvation is a process that begins with God’s grace and continues with the Holy Spirit guiding and empowering individuals (CCC 1996-2005). Salvation in Catholicism is achieved through faith in Jesus Christ, participation in the sacraments, and living a life of grace and obedience to God's commandments. Prominent Catholic theologians such as Karl Rahner have elaborated on the nature of grace as a universal and active force in the salvation process, emphasizing that the fullness of divine grace is realized in Christ alone.

Scriptures and Revelation

Hinduism’s sacred texts, such as the Vedas, Upanishads, and Bhagavad Gita, offer a diverse spiritual literature reflecting the broad yet confusing spectrum of Hindu beliefs and practices. However, the Church’s Magisterium maintains that its own tradition, including the Bible and the Catechism of the Catholic Church, is a complete, unified, and authoritative source of the divine revelation necessary for salvation (CCC 74-84). While the Church acknowledges the depth of Hindu texts and engages respectfully with their insights, it upholds the Bible and Church tradition as the definitive sources of divine revelation.

Rigidity of Teachings

Both Hinduism and Catholicism have practices aimed at spiritual growth and discipline. Hindu practices, such as dietary restrictions, are deeply embedded in its religious and cultural traditions (Manusmriti 5:51). Catholicism also has disciplines, such as abstaining from meat during Lent, intended to foster spiritual growth and freedom of heart (CCC 2043), but they can be adapted to individual circumstances. The flexibility of Catholic disciplines reflects a pastoral approach to spiritual growth, considering individual needs and contexts. Catholic teachings emphasize that spiritual practices should promote growth and reflection while upholding essential teachings, respecting the personal nature of spiritual journeys rather than imposing rigid constraints.

Conclusion

The dialogue between Hinduism and Catholicism reveals the depth and richness of both traditions, each offering valuable insights into spirituality, morality, and salvation. Hinduism provides a complex and nuanced understanding of divinity, reincarnation, and karma. Catholicism, with its emphasis on the unique role of Jesus Christ, the resurrection, and the active grace of God, offers a distinctive perspective on salvation and spiritual renewal. By approaching these differences with respect and empathy, we can appreciate the unique contributions of each tradition while firmly recognizing the Catholic perspective on the fullness of divine revelation and the path to salvation.

Saturday, November 11, 2023

Exception handling claims and counterpoints

Unsupported claim: The distinction between (unrecoverable) bugs/panics and recoverable errors is incredibly useful
Counterpoint: Most languages make no such distinction. The distinction is subjective and depends on the context - one person's "unrecoverable error" is another person's "must recover". For example, say we have a service like Repl.it - user code running in threads on a server. For expressiveness, the user code should be able to fail by throwing an exception. But that shouldn't take down the entire thread (unless it needs to), and it should never take down the entire server (running thousands of different users' threads). Most exception systems here would only give a false sense of safety: they allow some construct for recoverability (like try-catch or the ? operator), but then they allow exit() or some kinds of errors (OOM, divide by zero, ...) to take down the entire system. This is unacceptable. For example, H2O handled OOM in Java and generally recovered nicely - if OOM were unrecoverable, this couldn't have been done. As soon as you allow any sort of unrecoverable bugs/panics, one unvetted library or untrusted user code snippet can take down the entire system. The conclusion: unrecoverable panics shouldn't exist. A true exception system needs to take recoverability seriously and allow every single error in the entire software stack to be recoverable at some level of containment.
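
To make the containment idea concrete, here is a minimal Python sketch (not Repl.it's actual architecture; the function names are invented): every user snippet runs in its own worker thread, and the thread boundary catches all exceptions so one failing snippet cannot take the rest of the server down.

    import threading, traceback

    def run_user_code(snippet_id, user_fn):
        try:
            user_fn()
        except BaseException:  # recover at the containment boundary
            print(f"snippet {snippet_id} failed:")
            traceback.print_exc()

    def serve(snippets):
        threads = [threading.Thread(target=run_user_code, args=(i, fn))
                   for i, fn in enumerate(snippets)]
        for t in threads: t.start()
        for t in threads: t.join()
        print("server still running")

    serve([lambda: 1 / 0, lambda: print("ok")])

Even in this sketch, a hard OOM kill or an exit() inside native code would escape the try/except, which is exactly the gap the counterpoint objects to.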

Unsupported claim: Rust users over-estimate how often they need code that never panics
Counterpoint: A lot of software is mission-critical and absolutely has to keep going no matter what, like the server example mentioned previously. A server that constantly crashes is useless compared to one that logs the error and keeps going. In contrast, there has never been a time when a Rust user said "Gee, my program kept running. I really wish it had crashed with a panic instead of catching the exception and printing a stack trace" - such a user would just add an exit() to their exception handler.

Unsupported claim: Every error in a high-assurance system (think pacemakers/medical devices, critical automotive systems like brake control, flight control, engine control, nuclear device release control software) has to be a hard compile-time error. You absolutely should not be able to make "mistakes" here.
Counterpoint: It is very useful during development to be able to see all the errors in a program at once. If a compiler stops on the first error, this is impossible. Similarly, some errors in programs are only visible by actually running them. If programs under development cannot be run due to not passing one or more quality checks, this means a developer will have to waste time fixing compiler errors in code that may end up being deleted or completely rewritten anyway. If there is a valid semantics to the program, the compiler should compile the program with this semantics and not get in the developer's way. Certainly for the final production-ready binary, it is worth addressing all the quality checks, but not necessarily before then. It is unwise to encode quality control processes at the technical level when organizational policies such as commit reviews, continuous integration testing (which will include quality checks), and QA teams are so much more effective. Similarly to how one person's "must-recover" exception may be another person's "unrecoverable error", one person's "hard compile-time error" may be another person's "useless noise", and policies on such errors can only be dictated by the organization.

Unsupported claim: Most projects don't configure their compiler warning flags properly. Warnings are not enough for mission-critical code; it has to be a hard compile-time error.
Counterpoint: Per https://devblogs.microsoft.com/cppblog/broken-warnings-theory/, in Microsoft's Visual Studio Diagnostics Improvements Survey, 15% of 270 respondents indicated they build their code with /Wall /WX, indicating they have zero tolerance for any warnings. Another 12% indicated they build with /Wall. Another 30% build with /W4. These were disjoint groups that together make up 57% of users with stricter requirements than the Visual Studio IDE default (/W3). Thus, a majority of users definitely configure their warning flags, and likely more simply left the flags at the default after determining that the default satisfied their needs. If this is not the case, it is easy enough to make it mandatory for at least one compiler warning level flag to be specified. The NASA/JPL rules specify to enable all warnings.

Unsupported claim: Guaranteeing all possibilities are handled is not something a general-purpose language can do. We'd need some sort of type system capable of expressing everything, which in turn would require some form of advanced dependent type checking, but such a type system would likely be far too complex to be practical. You'll only get 5 users, 4 of which work on proof assistants at INRIA.
Counterpoint: Ada is/was a general-purpose language, and Spark is a conservative extension of Ada that adds high-assurance capabilities without changing the underlying language. Many practical projects have been implemented with Ada and with Ada/Spark. And adding a basic analysis that determines what exceptions a function can throw and whether these are all handled is easy (it is just Java's checked exception system).
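
As a rough illustration of that basic analysis (a sketch only, with invented data structures, not any particular checker's implementation): given which exceptions each function raises directly, which functions it calls, and which exceptions it catches, a fixpoint computes the set of exceptions that can escape each function.

    def compute_throws(raises, calls, catches):
        # throws[f] = exceptions f raises or a callee lets escape, minus what f catches
        throws = {f: set() for f in raises}
        changed = True
        while changed:
            changed = False
            for f in raises:
                escaping = set(raises[f])
                for g in calls.get(f, []):
                    escaping |= throws.get(g, set())
                escaping -= catches.get(f, set())
                if escaping != throws[f]:
                    throws[f] = escaping
                    changed = True
        return throws

    # Example: main calls parse; parse raises ParseError; main catches it.
    print(compute_throws(
        raises={"parse": {"ParseError"}, "main": set()},
        calls={"main": ["parse"]},
        catches={"main": {"ParseError"}},
    ))  # {'parse': {'ParseError'}, 'main': set()}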

Unsupported claim: "Yorick's shitty law of error handling": Mixing guaranteed handling and abort-on-error into the same language is likely to result in users picking the wrong approach every time, and a generally confusing mess. You can't make an error handling mechanism that fits 100% of the use cases, rather you can either target the first 90%, or the remaining 10%, with the latter requiring an increasingly difficult approach the more of that chunk you want to cover
Counterpoint: This is exactly what Java did with its checked and unchecked exceptions, and nobody was confused. Annoyed, yes (at the dichotomy), but not confused. And in my approach there is actually not a checked/unchecked distinction - it is only the function signature that controls whether an exception is checked, so it is even less likely to result in confusion or annoyance. Don't want to bother with checked exceptions? Don't mention them in the function signature.

Unsupported claim: From the point of view of the code that says throw, everything is a panic.
Counterpoint: There are actually several ways to handle an exception (traditional exception, assertion failure, sys.exit call, etc.) - see the sketch after this list:

  • warn: compile time error/warning
  • error: crash the program (exit, infinite loop, throw exception)
  • lint: log failure and keep going (banned at Bloomberg)
  • allow: (in release mode) keep going as though the throw call never existed
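
A rough Python sketch of the last three policies (the "warn" policy is a compile-time diagnostic with no runtime analogue here; the function and policy plumbing are invented for illustration):

    import logging, sys

    def run_with_policy(policy, fn, default=None):
        try:
            return fn()
        except Exception as exc:
            if policy == "error":   # crash the program
                sys.exit(f"fatal: {exc!r}")
            if policy == "lint":    # log the failure and keep going
                logging.warning("recovered from %r", exc)
                return default
            if policy == "allow":   # keep going as though the throw never happened
                return default
            raise

    print(run_with_policy("lint", lambda: 1 / 0, default=0))   # logs a warning, prints 0
    print(run_with_policy("allow", lambda: 1 / 0, default=0))  # silently prints 0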

Supported claim: Java checked exceptions have issues https://testing.googleblog.com/2009/09/checked-exceptions-i-love-you-but-you.html
Counterpoint: The issues are (1) long throws clauses, (2) exception-swallowing traps that rethrow as an unchecked exception, and (3) unreachable exceptions. Exception sets solve the issue of long throws clauses - Java was trying to do that with the inheritance hierarchy, but I think being able to have overlapping, customizable sets of exceptions will make a huge difference in usability. Allowing type signatures to skip over exceptions (similar to unchecked exceptions) avoids the need for exception swallowing. For unreachable exceptions there is a specific pragma telling the compiler that an exception is unreachable and not to emit any warnings about it.
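
For a feel of the exception-set idea, Python's except clauses already accept tuples of exception classes, which behave like overlapping, composable sets (the set names below are invented for illustration):

    TransientErrors = (ConnectionError, TimeoutError)
    ConfigErrors = (FileNotFoundError, PermissionError, KeyError)
    StartupErrors = TransientErrors + ConfigErrors   # sets can overlap and compose

    def load_config(path):
        try:
            with open(path) as f:
                return f.read()
        except ConfigErrors:
            return None   # fall back to defaults
        except TransientErrors:
            raise         # let the caller retry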

Saturday, August 22, 2020

The perfect time-frequency transform

A while back I took a music class. We had to do a final project, for which I researched time-frequency transforms. These take a sound signal in the time domain and produce a 2D graph (spectrogram) of intensity at each time/frequency pair. For example, there is the short-time Fourier transform (STFT), which takes windowed slices of the signal, runs the Fourier transform over each slice to get a frequency spectrum, and then arranges the resulting spectra side by side as the columns of the spectrogram.
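
A minimal spectrogram computation of a linear chirp, assuming NumPy and SciPy are available (the parameter choices are arbitrary):

    import numpy as np
    from scipy.signal import stft

    fs = 8000                                    # sample rate in Hz
    t = np.arange(0, 2.0, 1 / fs)
    x = np.sin(2 * np.pi * (200 + 300 * t) * t)  # a linear chirp
    f, times, Z = stft(x, fs=fs, nperseg=512)    # 512-sample window
    spectrogram = np.abs(Z) ** 2                 # intensity at each (frequency, time) cell
    print(spectrogram.shape)                     # (frequency bins, time slices)

Changing nperseg shifts detail between the time and frequency axes, which is the resolution problem described next.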

The problem with the STFT is its resolution; it requires picking a window size for the sliding slices, and that size forces a trade-off: short windows blur frequency detail, long windows blur timing. The right way to handle sampling is to construct an ideal signal using the Whittaker–Shannon interpolation formula, transform that to an ideal time-frequency spectrum, and then compute an average/integral of the area of the spectrum corresponding to each pixel.
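
For reference (taking the normalized sinc convention), the Whittaker–Shannon reconstruction from samples \(x[n]\) spaced \(T\) apart is \[ f(t) = \sum_{n=-\infty}^{\infty} x[n] \, \operatorname{sinc}\!\left(\frac{t-nT}{T}\right), \qquad \operatorname{sinc}(u) = \frac{\sin(\pi u)}{\pi u}, \] so the ideal signal is a weighted sum of shifted sinc functions, and, as described next, handling sampling comes down to summing integrals of the transform of those sincs.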

So, handling sampling is easy and just requires summing a double integral of the transform of a sinc function a number of times. But what is the actual transform? Reviewing the scientific literature I found many different transforms; the two most interesting were the minimum cross-entropy distribution (MCE-TFD) and the tomography time-frequency transform (TTFT). The MCE method tries to minimize the entropy of the spectrum, using an iterative process. The TTFT uses a fractional Fourier transform to get intensities along each frequency/time angle, and then uses the inverse Radon transform to turn these sections into a 2D spectrum.

The TTFT perfectly captures linear chirps; a linear chirp \(\sin((a+ b t) t)\) creates a line on the time-frequency spectrum. But when two or more chirps are present, the TTFT shows interference between them, due to a quadratic cross-term. The MCE minimizes entropy, not the cross-term, so it too has interference, although less of it. So the question is, can we get rid of the interference? This amounts to the transform being linear: the transform of a weighted sum is the weighted sum of the spectra.

We also want "locality" or clamping; a waveform that is 0 everywhere but a small time slice should have a spectrum that is 0 everywhere but in that slice. Similarly if the frequencies are limited to a band then the spectrum should also be limited to that band. Additionally we want time-shifting, so that the spectrum of \(f(x+k)\) is the spectrum of \(f\) shifted by \(k\).

So to review the properties:

  • Linearity: \(T(a f + b g) = a T(f)+b T(g) \)
  • Chirps: \(T(\sin((a+b t) t))(t,\omega) = \delta ( \omega - (a+b t) ) \) where \(\delta\) is the Dirac delta
  • Locality: \(T(H (k t) f(t))(t,\omega) = H ( k t) T(f(t))(t,\omega) \) where \(H\) is the step function
  • Bandlimiting: if for all \( \omega \in [a,b] \), \( \hat f (\omega) = 0 \), then \( T (f) (t,\omega) = 0 \) for all \( \omega \in [a,b] \)
  • Shifting: \(T(f(t))(t,\omega) = T(f(t+k))(t+k,\omega)\)

 The question now is whether such a transform exists. It seems like even these conditions are not sufficient to specify the result, because writing the function as a sum of chirps isn't enough to get a converging graph.

Monday, January 13, 2020

Stroscot devlog 4

The parser’s kind of working, besides features like EBNF, scannerless parsing, etc. that will be useful later but aren’t strictly necessary. So now I am writing the middle parts for the Stroscot programming language.

I started out by wanting to implement universes with dependent types, in particular experimenting with the New Foundations idea. I also wanted to add subtyping a la Dolan. Then I remembered insane, and that it had to suspend most of the “check” calls in order to check recursive types successfully, so I needed an evaluation model. I wondered if type inference was any more complicated, so I looked at Roy and saw the unify function. I concluded that implementing type inference would mean adding type variables and type solving (unification) to an existing type checker, but wouldn’t change much about the checker’s structure, so starting with a dependent type checker would be best.

Then I looked for dependent type checker tutorials and found this; it starts out by presenting an evaluator for the untyped lambda calculus and then extends it gradually. LambdaPi takes a similar approach. They both use environments and closures, but with optimal reduction there are duplicators rather than closures and the environments are stored in the graph, so the implementation is quite different.

So I figured I’d start with the optimal lambda calculus implementation in macro lambda calculus. But the first thing it does is a pseudo-alpha-renaming pass that identifies all the free variables and replaces them with wires. The code is complicated quite a bit by being split into a separate inet-lib library, which takes a custom-syntax interaction net description and uses JavaScript’s eval features to implement the interaction rules. I went and read “Optimality and inefficiency” and they show poor behavior on C_n = \x -> (2 (2 (2 (2 x)))) where 2 = \s z -> s(s z), when applied to \x y -> x y and \x y -> y x. I still like optimality though; the issues they present are unrelated to interaction nets and can be worked around with the garbage collection in Asperti/BOHM.
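
Just to make that term concrete, here it is as Python lambdas (with four nested copies of 2, matching the term above; the paper’s point is about reduction strategies in optimal evaluators, not about running it this way):

    two = lambda s: lambda z: s(s(z))        # Church numeral 2: \s z -> s (s z)
    c = lambda x: two(two(two(two(x))))      # C_n = \x -> 2 (2 (2 (2 x)))
    arg1 = lambda x: lambda y: x(y)          # \x y -> x y
    arg2 = lambda x: lambda y: y(x)          # \x y -> y x

    # In Python these just build closures; the interesting question is how much
    # sharing an optimal reducer preserves when normalizing c(arg1) and c(arg2).
    c(arg1), c(arg2)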

I looked for more implementations of LambdaScope and it seems that Jan Rochel’s graph-rewriting-lambdascope package (Haskell) is the only one. There are MaiaVictor’s optlam / absal (JS) and Vizaxo’s optal (Haskell), but they use Lamping’s algorithm, which has brackets/croissants and more overhead. They’re still worth looking at for ideas on how to implement interaction nets though.

I was thinking that perhaps Kernel/Fexprs had an interesting solution to variable lookup, as the environment is passed explicitly, so I was reading through the thesis and got lost. I tried reading the paper instead, and it also seems very long and full of Scheme-isms with not much on lambdas. The primitives are wrap and vau; somewhere in there we can insert optimal evaluation, but I didn’t get to environments yet.


Thursday, January 9, 2020

Stroscot Devlog 3

So I've been reading a lot about Earley parsers. They're divided into predictor, completer, and scanner sections. There are some “interesting observations” in the “Directly Executable Earley Parsing” paper: mainly, that additions are only ever made to the current and next Earley sets. But we still have to store the Earley sets; on the other hand, I noticed that we only have to look at them indexed by production. So we can index them by the next non-terminal and then look back to retrieve them. But it seems clear that the sets are mostly for notational, pedagogical, and debugging purposes, and the algorithm only needs the lists of suspended non-terminals pointed to by in-progress non-terminals.
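
For orientation, here is a compact recognizer sketch of plain Earley (no Leo optimization and no special nullable-rule handling; the toy grammar is invented). Items are (lhs, rhs, dot, origin) tuples, and each set is walked with the predictor/scanner/completer steps:

    def earley_recognize(grammar, start, tokens):
        sets = [[] for _ in range(len(tokens) + 1)]

        def add(k, item):
            if item not in sets[k]:
                sets[k].append(item)

        for alt in grammar[start]:
            add(0, (start, alt, 0, 0))

        for k in range(len(tokens) + 1):
            i = 0
            while i < len(sets[k]):          # sets grow while we walk them
                lhs, rhs, dot, origin = sets[k][i]
                i += 1
                if dot < len(rhs):
                    sym = rhs[dot]
                    if sym in grammar:                          # predictor
                        for alt in grammar[sym]:
                            add(k, (sym, alt, 0, k))
                    elif k < len(tokens) and tokens[k] == sym:  # scanner
                        add(k + 1, (lhs, rhs, dot + 1, origin))
                else:                                           # completer
                    for plhs, prhs, pdot, porigin in sets[origin]:
                        if pdot < len(prhs) and prhs[pdot] == lhs:
                            add(k, (plhs, prhs, pdot + 1, porigin))
        return any(lhs == start and dot == len(rhs) and origin == 0
                   for lhs, rhs, dot, origin in sets[-1])

    # Toy grammar: S -> S "+" S | "n"
    g = {"S": [("S", "+", "S"), ("n",)]}
    print(earley_recognize(g, "S", ["n", "+", "n"]))   # True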

The other thing I looked at was the Earley-LR optimization, basically turning the grammar into an LR(0) automaton with shift/reduce ambiguity resolved by Earley sets, and generally speaking it seems like it can speed up the parser. Each Earley item is normally a dotted production rule together with a parent set, but you can replace the production rule with an LR(0) DFA state, which is a set of LR(0) items.

According to Jeffrey Kegler: “In creating Marpa, I may have acquired as much experience with the AH algorithm as anybody, possible even Aycock/Horspool themselves. I found the special LR(0) items they create to be counter-productive. AH's own implementation apparently had bugs due to duplication of Earley items. I solved these, but it made the code very complex and slower. Most of the compression that the AH method bought was in predictions, which could be compressed in other, less complicated, ways. Using AH items means that whenever something needs to be done in terms of the user's grammar, there has to be a translation. Since the AH item to Earley item relationship is many-to-many, this is not trivial. The cost of the more complex evaluation phase seems likely to eliminate any speed-up in the recognizer.

In “Practical Earley Parsing”, as a side issue, they describe a way of handling nulled symbols and productions via grammar rewrites. This proved very valuable and Marpa *does* continue to use that. It's a bit ironic that I ended up ignoring a lot of AH's paper, but that their little implementation trick has proved central to my efforts.” (edited, see here)

So in summary LR doesn’t add much to Earley parsing, but computing nullability and handling it specially does. It seems like they're optimizing their parser a lot, but they're optimizing the wrong things. Memory layout and cache use are the main things to pay attention to on modern processors, and they don't seem to have a very interesting layout. And the radix tree for Earley items seems completely pointless. You have to iterate over all the pending Earley items anyway, so you might as well just use a linked list.
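
The nullability computation the rewriting trick depends on is a small fixpoint; a sketch (toy grammar invented, empty tuples standing for ε):

    def nullable_symbols(grammar):
        # A non-terminal is nullable if some alternative consists entirely of nullable symbols.
        nullable = set()
        changed = True
        while changed:
            changed = False
            for lhs, alts in grammar.items():
                if lhs not in nullable and any(
                        all(sym in nullable for sym in alt) for alt in alts):
                    nullable.add(lhs)
                    changed = True
        return nullable

    # A -> ε | "a";  B -> A A;  C -> "c"
    g = {"A": [(), ("a",)], "B": [("A", "A")], "C": [("c",)]}
    print(nullable_symbols(g))   # {'A', 'B'}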

The practical paper also leads to this rabbit hole of direct code execution / threaded execution / direct threaded code. It’s useful in parsers, but it shows up in any interpreter, e.g. Python. CPython uses computed gotos; the basic idea is that instead of jumping back to a central dispatch loop, each opcode handler ends with the code that looks up the next opcode in the dispatch table and jumps directly to its handler. This relies on the first-class labels (“labels as values”) feature available in GNU C. But reading the “Directly Executable Earley Parsing” paper, it just seems like a bunch of pointer stuff, and it doesn't do the state splitting that they do in their later papers.
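
Python itself can't express computed gotos, but the control-flow difference can be caricatured in a few lines (a conceptual sketch only, not how CPython is implemented): a central dispatch loop versus handlers that look up and call the next handler themselves.

    def run_loop(code):
        # central dispatch loop: every instruction returns here
        pc, acc = 0, 0
        while pc < len(code):
            op, arg = code[pc]
            if op == "add": acc += arg
            elif op == "mul": acc *= arg
            pc += 1
        return acc

    def run_threaded(code):
        # "threaded" style: each handler dispatches straight to the next one
        def add(pc, acc, arg): return dispatch(pc + 1, acc + arg)
        def mul(pc, acc, arg): return dispatch(pc + 1, acc * arg)
        table = {"add": add, "mul": mul}
        def dispatch(pc, acc):
            if pc >= len(code):
                return acc
            op, arg = code[pc]
            return table[op](pc, acc, arg)
        return dispatch(0, 0)

    prog = [("add", 2), ("mul", 3)]
    assert run_loop(prog) == run_threaded(prog) == 6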

Another question is whether we can implement parser combinators on top of Earley, memoizing all the states somehow. It seems like it might be possible with enough recursion and memoization hacking but probably not worth it.

And I still don't know how to handle extended grammars, and I haven't looked at the Leo paper at all. But from skimming code by people who’ve implemented it, Leo seems like it’s just a matter of keeping track of certain items. Extended Earley grammars generalize the dot position in a rule, so it’s an NFA (DFA?) instead of just a linear sequence. This of course should be easily accomplished with the help of some papers I have on extended rules.