I'm not aware of any real push to include Mathics in SageMath. I speculate that's because SageMath is mainly developed by research mathematicians and cryptographers, so performance is often a primary concern when evaluating components for inclusion. One of the reasons SageMath is the largest Cython project is that Cython makes it possible to use fast C/C++ libraries from within Sage.
I have the impression that Mathics is currently not seriously concerned with performance. E.g., try this little microbenchmark in Mathics: "AbsoluteTiming[Sum[i, {i, 1, 100000}]]", or read their roadmap. This is fine -- there are many interesting applications of the Mathematica programming language where performance isn't important, e.g., carefully stepping through some symbolic operations on expressions. However, for Sage the motivation of most developers is cutting-edge research math, and for that, performance is almost always very important. Performance is also why Sage reimplements a lot of functionality that SymPy provides rather than just calling SymPy: SymPy can be relatively slow, partly because it prioritizes being easy to install (it is pure Python), which is definitely not a priority for SageMath.
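For a rough sense of that per-object overhead, here is a hedged little comparison one could run in plain Python (it assumes SymPy is installed; exact numbers vary a lot by machine and version, and this measures SymPy object overhead rather than Mathics itself):

    import timeit

    # Brute-force sum with native Python ints vs. SymPy Integer objects.
    py = timeit.timeit('sum(range(1, 10001))', number=100)
    sp = timeit.timeit('sum(Integer(i) for i in range(1, 10001))',
                       setup='from sympy import Integer', number=100)
    print(f"python ints:    {py:.4f}s for 100 runs")
    print(f"sympy Integers: {sp:.4f}s for 100 runs")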
The mission statement for SageMath is to be a viable alternative to Mathematica, Matlab, Magma, and Maple. I never meant that to mean being a clone (e.g., directly running Mathematica code), but rather an alternative in the sense that research which might otherwise be done with those closed-source programs can instead be built on open-source mathematical software.
It's an interesting exercise to think about why the performance of Sum[i, {i, 1, 100000}] differs between Mathics and MMA. Mathics calls down to SymPy, which I think just does the sum in Python [1]. Mathematica (likely) identifies your sum as the 100000th triangular number and computes it directly in native code; I know Mathematica relies heavily on standard tables of summations/integrals/etc. [2]
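To make the closed form concrete: sum_{i=1}^{n} i = n*(n+1)/2, so the benchmark's answer is 100000*100001/2 = 5000050000. SymPy can derive that symbolically too -- whether Mathics actually takes this path for concrete bounds is exactly the question -- e.g.:

    from sympy import Sum, symbols

    # Symbolic closed form of the benchmark sum.
    i, n = symbols('i n', integer=True, positive=True)
    closed = Sum(i, (i, 1, n)).doit()   # -> n**2/2 + n/2
    print(closed)
    print(closed.subs(n, 100000))       # -> 5000050000, essentially instantly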
Pure Python is on the order of 1000x faster than Mathics at computing that sum by brute force. This suggests that some of the basic optimizations one does when implementing a language, just to get a minimal level of performance, haven't yet happened in Mathics. For example, when we added asymptotically fast arbitrary-precision integers to SageMath (by wrapping GMP, then later MPIR), we had to implement an "integer pool", since otherwise a lot of everyday microbenchmarks were far too slow.
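For what it's worth, here is a toy pure-Python sketch of the free-list idea behind such an "integer pool"; Sage's real version is Cython wrapping GMP/MPIR, and the PooledInt class below is made up purely for illustration:

    class PooledInt:
        """Toy free list: recycle wrapper objects instead of allocating new ones."""
        _pool = []                 # recycled, currently-unused instances
        __slots__ = ('value',)

        def __new__(cls, value):
            if cls._pool:
                obj = cls._pool.pop()          # reuse an old wrapper
            else:
                obj = super().__new__(cls)     # allocate only when the pool is empty
            obj.value = value
            return obj

        def release(self):
            type(self)._pool.append(self)      # hand the wrapper back for reuse

    a = PooledInt(7)
    a.release()
    b = PooledInt(8)
    assert a is b   # the same wrapper object was recycled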