Security Stories Part 3
I am back to share another security story from my archive. Beyond the pleasure of sharing my experiences, I write so that security may be taken more seriously by software organizations. It's all too easy to dangerouslySetInnerHTML and call it a day; it takes active intent and investment to secure software systems.
Rogue templates
At one company I worked for, there was a vast array of legacy internal software. One particular piece of internal glue code caught my eye: it ran the django templating engine over user-provided input. To avoid particulars, let's say that the relevant feature was built to send an email each time a user registers. A user registers for the platform with a name and profile blurb, which should be included in the email to admins.
Now, the secure way to do this would be to create an email template:
{% include "email_header.txt" %}
User {{ username }} registered! Their profile blurb is:
{{ blurb }}
{% include "email_footer.txt" %}
and render it, using context to safely interpolate user input:
from django.template import engines

# User is the hypothetical registration model with username and blurb fields
user = User(username="tmoney", blurb="Happy to be here")
django_engine = engines["django"]
template = django_engine.get_template("user_registration_email.txt")
# user input enters through the context, so it is interpolated as data
rendered = template.render({"username": user.username, "blurb": user.blurb})
Suppose that a user provided a malicious blurb. At first, thinking about the template engine, I figured the worst would be a denial of service attack via resource exhaustion, e.g.
What I hear on the news:
{% for i in "x"|rjust:"10" %}
.. repeat for loops ..
blah
{% endfor %}
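To check that intuition, here is a minimal sketch (assuming a configured django project) of the safe path handling such a blurb:
from django.template import engines

django_engine = engines["django"]
# condensed version of the malicious blurb above
blurb = 'What I hear on the news: {% for i in "x"|rjust:"10" %}blah{% endfor %}'
# The blurb enters via the context, so the engine treats it as data: the
# output shows the {% for %} syntax as (escaped) text, never executing it.
rendered = django_engine.from_string("Blurb: {{ blurb }}").render({"blurb": blurb})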
As the sketch shows, in the safe rendering approach the template code in the blurb is not executed - it's just output as inert text in the email. This wouldn't be a security story if the company had used the safe approach, of course! Instead, the email templating code was written to include the blurb as part of the raw template, as in:
from django.template import engines

user = User(username="tmoney", blurb="Happy to be here")
django_engine = engines["django"]
# UNSAFE: user input is spliced into the template source itself, so the
# engine will parse and execute anything in user.blurb
template = django_engine.from_string(f"""
{{% include "email_header.txt" %}}
User {user.username} registered! Their profile blurb is:
{user.blurb}
{{% include "email_footer.txt" %}}
""")
rendered = template.render()
With this approach, the malicious blurb is compiled and executed as template code. Assuming that the worst that could be achieved was resource exhaustion, I filed this away as a low-priority vulnerability. But it kept creeping back into my mind... knowing that django templates can call methods, and thinking about the django orm, if I could get a reference to a django model then I could easily chain from that model to related models, or possibly the entire table, via model querysets.
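To make the contrast concrete, here is the unsafe path running the resource-exhaustion blurb (again a minimal sketch, assuming a configured django project):
from django.template import engines

django_engine = engines["django"]
blurb = '{% for i in "x"|rjust:"10" %}blah{% endfor %}'
# The blurb is spliced into the template source, so the engine parses and
# executes it: the output contains "blah" repeated ten times. Nest enough
# of these loops and rendering time explodes.
rendered = django_engine.from_string(f"Blurb: {blurb}").render()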
After more digging, I finally found it: get_admin_log. This template tag is automatically available in django templates when the admin site is enabled, and it is exploitable wherever the admin site is in use - and you are using it, right? That's part of the django batteries-included deal! [1]
This template tag queries recent admin actions (specifically, the LogEntry model) - think creations, edits, deletions. Handily, each entry links to the user who made the change; that user is presumably a superuser, and presumably has links to most other objects in the system... now we are cooking!
Let's think through some of the possibilities for exploitation. With access to any User object, it's easy to reach all user objects via django meta model magic: user._meta.model.objects.all(). From there, one might head straight for deleting objects, following the model meta magic train to related models and wiping much of the database.
Fortunately, django is ahead of us on this front. First, "private" properties beginning with underscores are not accessible in template lookups. Second, common side-effect methods like queryset.delete() or model.delete() are marked with a special flag that prevents execution during template rendering (alters_data; see the docs).
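Both protections are easy to observe - a minimal sketch, assuming a configured django project and the hypothetical User model from above:
from django.template import engines

django_engine = engines["django"]
obj = User(username="tmoney", blurb="Happy to be here")

# Underscore lookups are rejected at parse time:
django_engine.from_string("{{ obj._meta }}")
# -> TemplateSyntaxError: Variables and attributes may not begin with underscores

# Methods flagged with alters_data (model.delete is) are never called during
# rendering; the lookup resolves to string_if_invalid ("" by default):
django_engine.from_string("{{ obj.delete }}").render({"obj": obj})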
That rules out the most obvious exploit: wiping tables via the standard django orm. It doesn't rule out rendering data, though, whenever the attacker can view the results of the injected template code. And given the relative obscurity of the protection mechanism (a blacklist of unsafe methods, rather than a whitelist of safe ones), it doesn't rule out developer-defined model methods. It's common to define helper methods that wrap the very same side-effect methods like model.delete(), and such helpers remain executable within a template.
Let's continue with the example, supposing there's a custom method User.deactivate() that typically runs when a user deactivates their own account. Since nobody thought of template injection as an attack vector, the method was never marked unsafe for templates. Now an attacker can provide a malicious blurb that deactivates admin users, locking them out of the system:
Nothing to see here :D
{% load log %}
{% get_admin_log 100 as log %}
{% for entry in log %}
{{ entry.user.deactivate }}
{% endfor %}
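The defense is a one-line flag on the method. Sketching it against the hypothetical User model (field names assumed for illustration):
from django.db import models

class User(models.Model):
    username = models.CharField(max_length=150)
    blurb = models.TextField()
    is_active = models.BooleanField(default=True)

    def deactivate(self):
        self.is_active = False
        self.save()

    # Tell the template engine this method has side effects: with the flag
    # set, {{ entry.user.deactivate }} resolves to string_if_invalid instead
    # of being called.
    deactivate.alters_data = True
Of course, remembering to flag every side-effect helper is exactly the blacklist discipline complained about above - the robust fix is to never feed user input to the template engine as source.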
This is just one example. The true attack surface depends on the context provided to the rendering call (e.g. the current user) and the methods reachable from the objects in that context (whether provided directly or via template tags).
I'd classify this vulnerability as code injection leading to limited remote code execution. What made it stand out to me was the seemingly innocent django templating engine: with a naive mindset of "it interpolates user inputs when rendering", an engineer misses the potentially catastrophic consequences of running the template engine on unsafe inputs.
So my main lesson here is to always be mindful when processing untrusted input. Another important lesson: if it seems insecure, it probably is - dig in, talk about it, and fix it. Engineering teams should cultivate a positive, blame-free culture of investigating, reporting, addressing, and educating.
-
[1] In Django 5.0+, get_admin_log fails outside of admin views. This appears to have been (incidentally?) patched out. Note that this is just one, default-provided attack vector. It's common to provide context to render calls, and that context is likely to have access to model objects.